In late 2022, OpenAI publicly released some of the most sophisticated deep-learning models to date: DALL-E and ChatGPT. These neural networks rely on machine learning to generate vast amounts of unique textual and visual content for users anywhere on the planet. OpenAI may have been the first company to release such products to the public, but it is not alone in their development; companies like NVIDIA, Google, and smaller artificial intelligence startups are building similar engines. These generative AI models allow users to input commands to create essays, music lyrics, simple code, and more. In January 2023, OpenAI, the Stanford Internet Observatory, and Georgetown University's Center for Security and Emerging Technology released a study exploring how these models could be used in influence campaigns by both state and non-state actors through the production of disinformation. The disruptive potential of these generative AI technologies has led some to consider them "weapon[s] of mass disruption."
Over the past decade, extremist groups have adapted their propaganda to be more interactive. Extremist video games, social media content, and music have found their way onto a variety of internet platforms. Unlike the extremist propaganda of the past, these new digital media products allow extremist groups to engage audiences in unprecedented ways. This Insight focuses on the emergence of AI-generated extremist propaganda. By simulating a variety of extremist content using generative AI models, the authors predict that this emerging technology may enable and accelerate the production of digital propaganda by non-state extremist actors, in both greater quantity and higher quality.
Read more at the Global Network on Extremism and Technology