
Session 3.7d Update: From Code to Westworld-like Crises: A Strategy for Containing Emerging Threats of Radicalized Robotics and Artificial General Intelligence (AGI)

Tracks
Thursday, November 14, 2024
2:30 PM - 3:30 PM
Nicholls Theatre

Details


As we venture into an era where Artificial Intelligence (AI) begins to mirror scenes from science-fiction sagas, the unveiling of the "Figure 01" robotic prototype by Figure AI and OpenAI marks a significant milestone. This prototype, a robot capable of reasoning and of learning from human actions without direct supervision, signals that Artificial General Intelligence (AGI) and autonomous robots are no longer a distant prospect but an imminent reality. Together with the rapid advancement of Large Language Models (LLMs) and generative AI, this reminds us that the time to address potential robotic and AGI threats is now. We believe robotic security and AI security should form part of both cyber security and military security.
Drawing on our research into containing both AGI and radicalized robotics, we investigated the emerging challenges posed by potentially self-aware AI and robots capable of independently controlling vital infrastructure. Our studies defined robotic radicalization as a combination of malice, autonomy, and lethality, elements that significantly amplify the threat level. Left unchecked, this radicalization could escalate into real-life "Westworld"-like disasters in which advanced robots turn against their human creators. In response, we introduced the "robotic kill chain", derived from the well-established "Cyber Kill Chain" model and adapted to preempt and neutralize threats from radicalized AGI and robotics. Our frameworks treated radicalized robots as dual threats: malware in their digital essence and terrorists in their physical form. By applying game theory, we offered a methodology to predict and counter the radicalization trajectory of these entities.
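To give a flavour of how a game-theoretic defender-versus-robot analysis can be set up, the sketch below models the interaction as a two-player zero-sum matrix game and approximates a mixed-strategy equilibrium with fictitious play. This is purely illustrative and not the methodology presented in the talk: the action labels, payoff numbers, and choice of fictitious play are all assumptions introduced here for the example.

```python
# Illustrative sketch only: a hypothetical zero-sum game between a defender
# and a radicalized robot, solved approximately with fictitious play.
# Action names and payoffs are invented for demonstration purposes.

# Rows: defender actions; columns: robot actions (hypothetical labels).
DEFENDER_ACTIONS = ["isolate_network", "physical_shutdown", "monitor_only"]
ROBOT_ACTIONS = ["exfiltrate_data", "sabotage_infrastructure", "lie_dormant"]

# Defender payoff matrix (robot payoff is the negative); illustrative values.
PAYOFF = [
    [ 3, -1,  1],   # isolate_network
    [ 1,  4, -2],   # physical_shutdown
    [-3, -4,  2],   # monitor_only
]

def fictitious_play(payoff, iterations=20000):
    """Approximate the mixed-strategy equilibrium of a zero-sum matrix game."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    row_counts = [0] * n_rows   # how often the defender has played each action
    col_counts = [0] * n_cols   # how often the robot has played each action

    # Start from arbitrary pure strategies.
    row_counts[0] += 1
    col_counts[0] += 1

    for _ in range(iterations):
        # Defender best-responds to the robot's empirical action frequencies.
        row_values = [
            sum(payoff[r][c] * col_counts[c] for c in range(n_cols))
            for r in range(n_rows)
        ]
        best_row = max(range(n_rows), key=lambda r: row_values[r])

        # Robot best-responds to the defender's empirical frequencies,
        # minimising the defender's payoff in a zero-sum game.
        col_values = [
            sum(payoff[r][c] * row_counts[r] for r in range(n_rows))
            for c in range(n_cols)
        ]
        best_col = min(range(n_cols), key=lambda c: col_values[c])

        row_counts[best_row] += 1
        col_counts[best_col] += 1

    defender_mix = [c / sum(row_counts) for c in row_counts]
    robot_mix = [c / sum(col_counts) for c in col_counts]
    return defender_mix, robot_mix

if __name__ == "__main__":
    d_mix, r_mix = fictitious_play(PAYOFF)
    for action, p in zip(DEFENDER_ACTIONS, d_mix):
        print(f"defender {action}: {p:.2f}")
    for action, p in zip(ROBOT_ACTIONS, r_mix):
        print(f"robot    {action}: {p:.2f}")
```

Under this kind of formulation, the equilibrium mix indicates how a defender might randomise containment actions when the adversary's next move along the radicalization trajectory is uncertain; richer models would replace the static matrix with a sequential or stochastic game.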


Speaker

Dr Timothy McIntosh
Generative AI & Cybersecurity Research Strategist
Cyberoo Pty Ltd

Biography

Dr. Tim McIntosh is the Generative AI & Cybersecurity Research Strategist at Cyberoo Pty Ltd and an adjunct lecturer in cyber security at La Trobe University, Melbourne, Australia. Tim completed his PhD in cyber security at La Trobe University. Before joining Cyberoo, he worked as a Security Operations Centre (SOC) analyst at CyberCX and as the Course Coordinator of Australia's first GRC-based Cyber Security degree at Academies Australasia Polytechnic. He is an IAPP CIPP/E and CIPT, an (ISC)2 CISSP and CSSLP, a CompTIA CASP+ and CySA+, a Microsoft Certified Cybersecurity Architect Expert, and a Microsoft Certified Solution Developer (App Builder). His research focuses on two main areas: ransomware mitigation, and the cybersecurity implications of Large Language Models (LLMs) for improving cyber defense and education. His current research projects include brainwashing LLMs, inducing semantic instability in generative AI, and exploring proper containment of theoretical AGI (Artificial General Intelligence). Tim has published as first author in several highly ranked academic journals, including Future Generation Computer Systems, Computers & Security, ACM Computing Surveys, IEEE Transactions on Artificial Intelligence, and IEEE Transactions on Cognitive and Developmental Systems.