Threat Actors are Interested in Generative AI, but Use Remains Limited

Cyber Security Threat Summary:
Since at least 2019, Mandiant has tracked threat actor interest in, and use of, AI capabilities to facilitate a variety of malicious activity. Based on our own observations and open source accounts, adoption of AI in intrusion operations remains limited and primarily related to social engineering.

In contrast, information operations actors of diverse motivations and capabilities have increasingly leveraged AI-generated content, particularly imagery and video, in their campaigns, likely due at least in part to the readily apparent applications of such fabrications in disinformation. Additionally, the release of multiple generative AI tools in the last year has led to a renewed interest in the impact of these capabilities.

We anticipate that generative AI tools will accelerate threat actor incorporation of AI into both information operations and intrusion activity. Mandiant judges that such technologies have the potential to significantly augment malicious operations in the future, lowering the barrier to entry for threat actors with limited resources and capabilities, much as exploit frameworks such as Metasploit or Cobalt Strike have done. While adversaries are already experimenting, and we expect to see more use of AI tools over time, effective operational use remains limited (Mandiant, 2023).

Security Officer Comments:
The use of AI-generated content in information operations enhances the believability of fake visuals and audio, making it a potent tool for shaping political narratives. The growing availability of AI tools for creating realistic fake images and videos suggests an ongoing trend toward AI-driven disinformation in modern warfare. As Mandiant notes in its blog post: “Nation-states like Russia, China, Iran, and others use generative adversarial networks to create realistic profile photos for fake personas on social media. Non-state actors also misuse AI-generated images for malicious purposes. Another AI technique, text-to-image models, accepts text inputs and creates corresponding images. These tools are expected to grow as more powerful options emerge. Image-based tools pose a greater deceptive threat compared to text-based AI, often becoming preferred for disinformation. More powerful tools also lead to authentic-looking fake videos created using AI-manipulated video technology, including superimposing faces onto existing videos” (Mandiant, 2023).

Suggested Correction(s):
To counter AI-driven threats, organizations should apply AI to defense as well. Machine learning-based detection systems can surface anomalous activity and evolving attack techniques quickly, while AI-assisted triage accelerates threat analysis and decision-making. Adaptive, AI-based access controls can strengthen security without disrupting legitimate users, and AI tooling can improve phishing detection and help teams prioritize the most critical security issues. Sharing threat intelligence, educating users about AI-enabled threats, and supporting ethical AI research further harden defenses. By adopting AI defensively, organizations can protect their networks with the same class of capabilities that attackers are beginning to exploit.
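As a concrete illustration of the "AI systems help spot unusual activities" recommendation, the sketch below trains an Isolation Forest on baseline activity and flags outliers. This is a minimal, hypothetical example (not Mandiant's tooling); the feature choices (logins per hour, data egress) and thresholds are assumptions for illustration only.

```python
# Hypothetical anomaly-detection sketch: flag unusual account activity
# with an Isolation Forest trained on normal behavior.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline: [logins_per_hour, data_egress_mb] for normal users.
normal_activity = rng.normal(loc=[5.0, 20.0], scale=[1.5, 5.0], size=(500, 2))

# Two hypothetical suspicious sessions: login bursts with large egress.
suspicious = np.array([[40.0, 500.0], [55.0, 800.0]])

# contamination is the assumed fraction of anomalies in live traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# predict() returns 1 for inliers and -1 for outliers.
print(model.predict(suspicious))  # -> [-1 -1]
```

In practice the same pattern extends to richer telemetry (authentication logs, DNS, netflow), with the model retrained periodically so the baseline tracks legitimate changes in user behavior.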