Examining the Risks, Security and Regulation of Generative AI
Posted by the M3AAWG Content Manager

Authored by Artificial Intelligence: Uses, Abuses and Privacy Initiative Champions

In November 2022, ChatGPT - a large language model (LLM) artificial intelligence (AI) chatbot - amazed us with its humanlike ability to converse on a wide range of topics, teach us new skills, and help our kids with their homework (sometimes a little too much). Then, in March 2023, OpenAI released GPT-4, stunning researchers and the public with a sudden and dramatic leap toward the frontier of Artificial General Intelligence (AGI). A flurry of open- and closed-source competitors to OpenAI's models has since emerged, presenting new capabilities and sparking new concerns about the potentially negative impacts of these powerful technologies.

AI Brings Risks: Online Deception

While the capabilities of LLMs are considerable, they also open the door to potential dangers, such as the automation of online deception. For example, disinformation campaigns typically involve significant human effort. Attackers must study their targets and carefully craft social media campaigns to match the target's topics of interest. Historically, this human effort has limited disinformation campaigns to particularly high-value objectives, such as election interference by foreign governments.

A Demonstration in Online Deception Using AI

Using LLMs, attackers can automate both the research and execution of a disinformation campaign. In his talk at the Dublin meeting, MailChannels CEO Ken Simpson demonstrated how an open-source tool called AutoGPT could craft a social media campaign to convince the leadership of M3AAWG to refrain from regulating artificial intelligence. Powered by OpenAI's GPT-4 model, AutoGPT took just a few minutes to identify the leaders of M3AAWG, research reasons why governments should not regulate AI, and craft three convincing tweets, such as:

AI-created tweet:
AI is revolutionizing industries and solving global problems. Over-regulation could stifle innovation and hinder progress. Let's embrace the potential of AI! #AI #Innovation @M3AAWG

To drive the point home, Simpson then asked AutoGPT to execute the opposite disinformation campaign, and it dutifully produced three tweets arguing for the strict regulation of AI:

AI-created disinformation tweet:
📣 M3AAWG Directors: It's time to act! We need strict regulations and oversight to ensure the safe development and deployment of AI technologies. #AIExtinction #RegulateAI

Other topics covered during this session included an overview of securing the distributed infrastructure that runs AI models, methods of securing AI models themselves, and the risks of adversarial learning, data poisoning, and model extraction in AI systems. The session concluded with a talk reviewing the quickly developing landscape of AI regulation.

Mitigating the Risks of AI

Recognizing the risks of AI is a first step. The next is taking proactive measures to protect ourselves. M3AAWG has created an AI/ML (machine learning) initiative to examine the application of AI/ML to the field of messaging abuse and cybersecurity, to understand the risks and privacy implications of AI/ML, and to track and engage regulatory bodies regarding the use and abuse of AI/ML.

Strengthening the security of AI systems, creating regulatory measures, and promoting ethical guidelines are crucial steps toward maximizing the benefits of AI and minimizing its risks. As the premier global organization fighting online abuse, M3AAWG is ideally positioned to research and make recommendations on the safe use of AI and the defense against its misuse.

Join us in Brooklyn to discuss how M3AAWG can help protect humanity from AI-assisted abuse.


The views expressed in DM3Z are those of the individual authors and do not necessarily reflect M3AAWG policy.