Nation-State Hackers Abuse Gemini AI Tool
Summary:
Rapid advancements in AI are revealing new possibilities for the way humans work and accelerating innovation in science, technology, and more. AI is on the cusp of revolutionizing security and defense, with capabilities to sift through complex telemetry, secure code, improve vulnerability discovery, and modernize operations. However, anxieties about AI's potential are exacerbated by threat actors' misuse of AI tools for malicious activity. This report outlines findings on nation-state threat actors' use of Google's Gemini web application. Google observed threat actors unsuccessfully attempting to bypass Gemini's safety controls with publicly available jailbreak prompts, but did not observe any machine-learning-focused attacks using tailored prompts. Instead, it saw a trend of threat actors experimenting with Gemini to enable and assist their operations.
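For context on what those safety controls look like from the developer side, the following is a minimal sketch using the google-generativeai Python SDK. The API key placeholder, model name, and thresholds are illustrative assumptions, not details from Google's report. When a prompt or response trips a safety filter, the API reports a block reason instead of returning content, which is the behavior the jailbreak prompts described above failed to circumvent.

```python
# pip install google-generativeai
# Illustrative sketch: shows how Gemini API safety settings surface blocked
# prompts. The key, model name, and thresholds here are assumptions.
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder

model = genai.GenerativeModel(
    "gemini-1.5-flash",  # illustrative model choice
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
)

response = model.generate_content("How do I harden a Linux server against credential theft?")

# A blocked prompt carries a block_reason instead of generated text.
if response.prompt_feedback.block_reason:
    print("Prompt blocked:", response.prompt_feedback.block_reason)
else:
    # Individual candidates can also be filtered mid-generation;
    # a SAFETY finish_reason means output was cut for policy reasons.
    for candidate in response.candidates:
        print("finish_reason:", candidate.finish_reason)
    print(response.text)
```

Stricter thresholds such as BLOCK_LOW_AND_ABOVE trade coverage for safety; the production defaults vary by harm category.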
At present, generative AI has increased cybercriminals' productivity, potentially allowing operations to run at larger scale, but according to Google, adversaries have not been able to develop novel capabilities with its assistance. One of the main uses of Gemini that Google observed was research on potential attack infrastructure and reconnaissance on prospective target organizations, activity that supports much of the cyberattack lifecycle. Iranian information operations (IO) actors were the heaviest users of Gemini, accounting for three-quarters of all use by IO actors. North Korean APT actors used Gemini similarly, drafting cover letters and researching jobs, activities that would likely support efforts by North Korean nationals to obtain freelance and full-time positions at foreign companies under fake identities while concealing their true locations. IO actors were also observed using Gemini to develop personas and localize messaging to increase the reach of their campaigns. Threat actors have even attempted to use Gemini to research Gmail client-specific phishing techniques and to code information-stealer malware targeting Chrome browsers. Rather than enabling disruptive change, as some may expect, the main benefit of generative AI is that it lets threat actors conduct operations faster and at higher volume. For more skilled adversaries, generative AI provides a solid operational framework, potentially replacing one of the use cases for tools like Metasploit or Cobalt Strike.
Security Officer Comments:
AI chatbots specifically designed for malware development underscore concerns about the misuse of powerful AI tools. While much of the current discourse on the adversarial use of AI is based on theoretical research, some of these studies only partially reflect the threat actor use cases observed in the wild. It is important to keep in mind that while AI can be a helpful tool for an adversary, it is not currently the game-changer it is often portrayed to be: Google has seen no indication that generative AI can develop novel capabilities or methods for attackers, who have primarily used Gemini for research and code troubleshooting. The purpose of Google's analysis of its proprietary AI assistant Gemini is to synthesize theoretical research with real-world observations of attack techniques, helping raise awareness and create a more secure community overall. Community-based intelligence and information sharing are pivotal to helping the private sector, governments, and other stakeholders maximize the benefits of AI in security while reducing the risk of adversaries abusing it.
Suggested Corrections:
Deepfakes:
1. Mastering Skepticism:
- Question Everything: Always ask who created content, what their agenda is, and if it seems too good/bad to be true.
- Fact-Check: Verify information from multiple sources including reputable news outlets and experts.
- Spot Inconsistencies: Look for unnatural movements, lip-syncing mismatches, or strange lighting in media.
- Use Reverse Image Search: Upload suspicious images to tools like TinEye or Google Image Search (see the perceptual-hashing sketch after this list for a local alternative).
- Utilize Forensic Tools: Access specialized software to analyze pixels, audio frequencies, and technical aspects.
- Consult Experts: Seek assistance from fact-checking organizations or cybersecurity professionals for harmful deepfakes.
- Join Online Detective Communities: Participate in forums dedicated to debunking deepfakes and share findings.
- Be Mindful of Sharing: Avoid spreading misinformation by verifying content before sharing, especially on social media.
- Strengthen Online Presence: Use strong passwords, enable multi-factor authentication, and limit personal information online.
- Support Ethical AI Development: Advocate for ethical guidelines and regulations to prevent misuse of deepfake technology.
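One lightweight way to put the reverse-image-search advice above into practice is perceptual hashing, which flags images that are near-duplicates of known photos even after resizing or light re-encoding, a common trait of recycled persona avatars. The sketch below uses the Pillow and imagehash Python libraries; the function name, file paths, and distance threshold are hypothetical choices for illustration, and this complements rather than replaces services like TinEye.

```python
# pip install pillow imagehash
# Sketch: flag a suspicious image whose perceptual hash is near-identical
# to a known image. Paths, names, and threshold are hypothetical.
from PIL import Image
import imagehash

def looks_recycled(candidate_path: str, known_paths: list[str], threshold: int = 8) -> bool:
    """Return True if the candidate image is a likely reuse of a known image.

    Perceptual hashes (pHash) survive resizing and re-encoding, so a small
    Hamming distance between hashes suggests the same underlying picture.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    for path in known_paths:
        # Subtracting two ImageHash objects yields their Hamming distance.
        if candidate_hash - imagehash.phash(Image.open(path)) <= threshold:
            return True
    return False

# Example: compare a suspicious avatar against a local corpus of known photos.
# print(looks_recycled("avatar.jpg", ["stock1.jpg", "stock2.jpg"]))
```

A distance of 0 is a near-exact match; small values usually indicate the same source picture, while the right cutoff depends on the image set and should be tuned against known pairs.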
https://www.infosecurity-magazine.com/news/nation-state-abuse-gemini-ai/
https://cloud.google.com/blog/topics/threat-intelligence/adversarial-misuse-generative-ai