
Session 2.6e Update: Large Language Models (LLMs) and Generative AI-Powered Information Warfare: How to Customize Highly Nuanced but Convincing Misinformation and Disinformation, and How to Mitigate Them

Tracks
Wednesday, November 13, 2024
1:30 PM - 2:30 PM
Sutherland Theatre

Details


In the evolving landscape of information warfare, the roles of Large Language Models (LLMs) and Generative AI have become increasingly significant. This presentation examines the potent application of these technologies in generating and mitigating misinformation and disinformation in the contexts of government elections and international relations, with notable examples including the tensions between Ukraine and Russia, and between supporters of Palestine and supporters of Israel. The ease with which Generative AI can produce vast quantities of convincing, ideologically charged content poses a unique challenge to the integrity of public discourse.
Our research introduces a novel algorithm that manipulates LLMs to produce content with a tailored blend of accuracy and fabrication. This method enables precise control over the ideological bent of the generated information, allowing for the strategic manipulation of public opinion through semantic and ideological shifts. Such a capability signifies a paradigm shift in information warfare, enabling actors to saturate the information space with targeted narratives that are challenging to detect, censor, or counter.
The presentation will further explore the ethical and regulatory dilemmas posed by the use of LLMs in such capacities, emphasizing the urgent need for robust countermeasures. We will discuss sophisticated strategies to identify and neutralize misinformation and disinformation, alongside the broader implications of these technologies for societal trust and the complexity of public perception. This discussion aims to shed light on the double-edged nature of Generative AI in the modern information warfare domain, advocating for a balanced approach that harnesses its potential while safeguarding against its perils.


Speaker

Dr Timothy McIntosh
Generative AI & Cybersecurity Research Strategist
Cyberoo Pty Ltd

Biography

Tim McIntosh is the Generative AI & Cybersecurity Research Strategist at Cyberoo Pty Ltd and an adjunct lecturer in cyber security at La Trobe University, Melbourne, Australia. Tim completed his PhD in cyber security at La Trobe University. Before joining Cyberoo, he worked as a Security Operations Centre (SOC) analyst at CyberCX and as the Course Coordinator of Australia's first GRC-based Cyber Security degree at Academies Australasia Polytechnic. He is an IAPP CIPP/E and CIPT, an (ISC)2 CISSP and CSSLP, a CompTIA CASP+ and CySA+, a Microsoft Certified Cybersecurity Architect Expert, and a Microsoft Certified Solution Developer (App Builder). His research focuses on two main areas: ransomware mitigation, and the cybersecurity implications of Large Language Models (LLMs) for improving cyber defense and education. His current research projects include applying generative AI to sophisticated scam detection, inducing semantic instability in generative AI, and exploring proper containment of theoretical Artificial General Intelligence (AGI). Tim has published as first author in several highly ranked academic journals, including Future Generation Computer Systems, Computers & Security, ACM Computing Surveys, IEEE Transactions on Artificial Intelligence, and IEEE Transactions on Cognitive and Developmental Systems.