Microsoft Exposes LLMjacking Cybercriminals Behind Azure AI Abuse Scheme
Summary:
Microsoft has updated its civil lawsuit to name four individuals responsible for developing and distributing tools that bypass security measures in generative AI services, including Microsoft’s Azure OpenAI Service. The legal action aims to halt their operations, dismantle their cybercriminal enterprise, and deter others from misusing AI technology. The named defendants—Arian Yadegarnia ("Fiz") from Iran, Alan Krysiak ("Drago") from the UK, Ricky Yuen ("cg-dot") from Hong Kong, and Phát Phùng Tấn ("Asakuri") from Vietnam—are key figures within Storm-2139, a global cybercrime network. These actors illegally accessed AI services by exploiting exposed customer credentials scraped from public sources, then altered the services’ capabilities to generate illicit content, including non-consensual intimate images of celebrities. They resold access to other malicious actors, providing instructions on creating harmful synthetic media.
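The initial access vector described here, harvesting API keys that customers accidentally expose in public code or configuration, is the core of "LLMjacking." A minimal defensive sketch of the countermeasure is a secret scanner run over a repository before publishing. The patterns below are illustrative heuristics (the 32-hex-character pattern loosely matches the shape of Azure OpenAI keys, not an official format), and all names are hypothetical:

```python
import re
from pathlib import Path

# Illustrative heuristics only; real scanners (e.g., commercial or open-source
# secret-scanning tools) use vetted, service-specific signatures.
PATTERNS = {
    # Loosely matches the 32-hex-char shape of an Azure OpenAI API key.
    "azure_openai_key": re.compile(r"\b[0-9a-f]{32}\b"),
    # Catches generic "api_key = '...'" assignments with long token values.
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"
    ),
}

def scan_text(name: str, text: str) -> list[tuple[str, str, int]]:
    """Return (file, pattern_label, line_number) for each suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((name, label, lineno))
    return hits

def scan_repo(root: str) -> list[tuple[str, str, int]]:
    """Walk a directory tree and scan every readable file."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                hits.extend(scan_text(str(path), path.read_text(errors="ignore")))
            except OSError:
                continue  # skip unreadable files
    return hits
```

Flagging a key before it reaches a public repository, and rotating any key that does leak, removes the raw material this group depended on; scanning is a complement to, not a substitute for, keyless authentication where the platform supports it.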
The lawsuit builds on Microsoft’s initial legal action in December 2024, when the company filed a complaint in the Eastern District of Virginia against ten unidentified "John Doe" defendants for violating U.S. law and Microsoft’s Acceptable Use Policy. The investigation has since revealed that Storm-2139 operates through a structured hierarchy: creators develop the illicit tools, providers distribute them with different service tiers, and end users leverage them to generate harmful synthetic content. Microsoft has identified additional members in the U.S., specifically in Illinois and Florida, but has withheld their identities pending further criminal investigations.
In response to the lawsuit, the court issued a temporary restraining order and a preliminary injunction, enabling Microsoft to seize a key website that facilitated Storm-2139’s operations. This action disrupted the group's ability to operate and caused immediate internal conflict among its members, with some blaming each other for the exposure. Online discussions in Storm-2139’s monitored channels revealed frustration and speculation about the lawsuit’s impact, with one message referencing a leaked LinkedIn profile connecting "Fiz" to his real identity. The disruption also led some members to retaliate by doxing Microsoft’s legal team, posting names, personal details, and even photographs, a tactic often used to intimidate and harass.
Security Officer Comments:
Microsoft’s counsel has also received emails from suspected Storm-2139 members attempting to shift blame onto one another, volunteering details about the group’s illegal activities, including the sale of stolen Azure API keys and of proxy software used to abuse AI services. Some emails named specific individuals involved and warned of ongoing financial fraud, suggesting that Storm-2139’s operations had siphoned millions of dollars from Microsoft’s cloud services.
Suggested Corrections:
Microsoft remains committed to combating the abuse of generative AI and ensuring its responsible use. The company has implemented advanced security guardrails and continues to refine its approach to preventing the misuse of AI-generated content. It has also advocated for stronger legal frameworks, publishing a whitepaper that recommends updates to U.S. criminal law to help law enforcement address emerging AI-enabled threats. Furthermore, Microsoft has expanded its efforts against intimate image abuse, reinforcing measures to protect users from both AI-generated and authentic abusive imagery. While disrupting cybercriminal networks is an ongoing challenge, Microsoft’s legal action against Storm-2139 demonstrates its persistence in holding malicious actors accountable and sets a precedent for defending AI services against abuse.
Link(s):
https://thehackernews.com/2025/02/microsoft-exposes-llmjacking.html