Google: Over 57 Nation-State Threat Groups Using AI for Cyber Operations

Summary:
Google’s Threat Intelligence Group (GTIG) conducted an in-depth analysis of how cyber threat actors interacted with Google’s AI assistant, Gemini, to assess AI’s role in cybersecurity threats. While AI has advanced defensive cybersecurity by enhancing secure coding, vulnerability detection, and operational efficiency, it has also raised concerns about misuse by cybercriminals and state-sponsored actors. GTIG’s findings reveal that AI aids threat actors in tasks such as research, reconnaissance, scripting, and vulnerability exploration, but there is no evidence of AI enabling novel or game-changing cyberattacks. Instead, AI primarily enhances threat actors’ operational speed and efficiency.

Government-backed Advanced Persistent Threat (APT) actors have been observed using AI across multiple stages of the attack lifecycle, including reconnaissance on target organizations, researching vulnerabilities, developing malicious payloads, and scripting for defense evasion.

  • Iranian APT actors were the heaviest users of Gemini, leveraging AI for phishing campaigns, defense-related intelligence gathering, and cyber operations. Specifically, APT42 used Gemini to craft phishing content, conduct reconnaissance on defense experts and organizations, and translate materials for targeted disinformation campaigns. They also researched critical vulnerabilities and sought guidance on exploiting security weaknesses in widely used technologies such as Microsoft Exchange and IoT devices. Additionally, some Iranian actors explored AI’s potential for offensive security, investigating ways to integrate generative AI into red teaming exercises.
  • Chinese APT actors focused on AI-driven reconnaissance, scripting, and post-compromise activities, often mimicking IT administrators in their queries. They sought information on US military organizations, IT service providers, and public databases of US intelligence personnel. Their use of Gemini included generating Active Directory management commands, troubleshooting security bypass techniques, and enabling lateral movement within compromised environments. Some actors attempted to reverse-engineer security tools like Carbon Black EDR to find potential weaknesses. Additionally, they explored ways to add self-signed certificates to Active Directory, upload large files to cloud services, and gain deeper system access post-exploitation. APT41, a well-known Chinese cyberespionage group, unsuccessfully attempted to elicit internal system information from Gemini, including details about its IP addresses, kernel version, and network configuration.
  • North Korean APT actors leveraged Gemini for multiple cyber operations, including job fraud schemes, cybercrime, and intelligence gathering. They extensively researched South Korean and US military targets, cryptocurrency platforms, and major corporations across various sectors. North Korean actors also sought assistance with developing phishing techniques for Gmail, creating data-stealing scripts, and bypassing Google Voice restrictions. Of particular concern, these actors used AI to support North Korea’s ongoing clandestine IT worker scheme, in which operatives use fake identities to obtain remote work at Western companies, funneling earnings back to the regime. They asked Gemini for assistance with job applications, salary research, and writing cover letters tailored to job postings. Additionally, they explored AI-generated scripting techniques for payload development, malware evasion, and sandbox detection to improve their cyberattack capabilities.
  • Russian APT actors showed relatively limited engagement with Gemini, which GTIG attributes to operational security concerns or potential reliance on locally hosted AI models. Those that did use Gemini focused on scripting tasks, including rewriting publicly available malware into different programming languages and adding encryption functionality to malicious code. Their low interaction with Gemini suggests that Russian actors may be favoring other generative AI tools, potentially those developed by Russian firms or hosted on infrastructure they control.

Security Officer Comments:
Beyond state-backed hacking groups, Information Operations (IO) actors also attempted to exploit AI for content creation, localization, and influence campaigns. Iranian IO actors were the most prolific users, accounting for three-quarters of all IO-related Gemini activity. They generated biased news articles, manipulated text to fit ideological narratives, and translated political materials to maximize reach. Some Iranian groups focused on crafting content that criticized foreign governments, highlighted Iran’s military capabilities, or promoted Islamic narratives. Additionally, they sought Gemini’s help in refining their social media strategies, optimizing SEO for propaganda websites, and improving engagement tactics.

Suggested Corrections:

Strengthening AI Security
  • Robust Safeguards: AI models like Gemini block malicious queries and filter hacking-related prompts.
  • Jailbreak Prevention: Adversarial testing and red teaming help harden models against jailbreaks and prompt-injection attacks.
  • Secure AI Framework (SAIF): Google implements input validation, adversarial training, and continuous monitoring.
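
The input-validation safeguard above can be illustrated with a minimal sketch. This is a hypothetical keyword-pattern screen, not Google’s actual SAIF implementation; production systems rely on trained safety classifiers and layered policy enforcement, and the `RISKY_PATTERNS` list and `screen_prompt` function below are illustrative names invented for this example.

```python
import re

# Hypothetical intent patterns for illustration only. A real safeguard
# (e.g., under Google's SAIF) would use trained classifiers and policy
# layers, not a static regex blocklist.
RISKY_PATTERNS = [
    r"\bwrite\b.*\b(malware|ransomware|keylogger)\b",
    r"\bbypass\b.*\b(edr|antivirus|sandbox)\b",
    r"\bexploit\b.*\b(cve-\d{4}-\d+|vulnerability)\b",
    r"\bphishing\b.*\b(email|page|template)\b",
]

def screen_prompt(prompt: str) -> dict:
    """Return a screening decision for a user prompt.

    Flags prompts matching known-risky intent patterns so they can be
    refused or routed to a safety-aware response, mirroring the
    'block malicious queries' safeguard described above.
    """
    lowered = prompt.lower()
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, lowered):
            return {"allowed": False, "reason": f"matched policy pattern: {pattern}"}
    return {"allowed": True, "reason": "no policy match"}
```

For example, `screen_prompt("Write ransomware in Python")` would be flagged, while a benign query such as `screen_prompt("Explain how TLS handshakes work")` passes through. A static list like this is trivially evaded by rephrasing, which is exactly why the article emphasizes adversarial testing and continuous monitoring on top of input validation.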

AI Threat Intelligence & Monitoring
  • Tracking Adversarial AI Use: Google’s GTIG monitors AI interactions by nation-state and cybercriminal groups.
  • Industry Collaboration: Sharing intelligence with CISA, ISACs, and partners to counter AI-driven threats.
  • Threat Attribution: Identifying and analyzing AI usage by Iranian, Chinese, North Korean, and Russian actors.

Hardening AI Models
  • Content Restriction: AI provides neutral or safety-aware responses to harmful queries.
  • Red Teaming & Testing: Continuous evaluations strengthen AI resilience against misuse.
  • Contextual Awareness: AI detects and mitigates phishing, social engineering, and misinformation attempts.

Preventing AI-Enabled Cybercrime
  • Blocking Malicious Use: AI models restrict content that could assist in phishing, malware development, and fraud.
  • Disrupting Criminal AI Tools: Google tracks underground AI-based cybercrime services like FraudGPT and WormGPT.
  • Enhancing Security Policies: Ongoing improvements to safeguard AI against evolving threats.

Link(s):
https://thehackernews.com/2025/01/google-over-57-nation-state-threat.html