Cybercriminals Train AI Chatbots for Phishing, Malware Attacks

Cyber Security Threat Summary:
“In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard. Both AI-powered bots are the work of the same individual, who appears to be deep in the game of providing chatbots trained specifically for malicious purposes ranging from phishing and social engineering, to exploiting vulnerabilities and creating malware” (Bleeping Computer, 2023).

First advertised on July 25th, FraudGPT has appeared on various hacker forums under the username CanadianKingpin12. The tool is billed as helping fraudsters, hackers, and spammers carry out attacks. Researchers from SlashNext shared intelligence on CanadianKingpin12's methods for training chatbots using unrestricted data sets sourced from the dark web, or by repurposing LLMs originally developed to fight cybercrime.

Security Officer Comments:
In private conversations, CanadianKingpin12 said that they were working on DarkBART - a "dark version" of Google's conversational generative artificial intelligence chatbot. The threat actor also allegedly has access to another LLM called DarkBERT, which was developed by South Korean researchers and trained on dark web data to fight cybercrime. “DarkBERT is available to academics based on relevant email addresses but SlashNext highlights that this criteria is far from a challenge for hackers or malware developers, who can get access to an email address from an academic institution for around $3” (Bleeping Computer, 2023).

SlashNext researchers shared that CanadianKingpin12 said that the DarkBERT bot is "superior to all in a category of its own specifically trained on the dark web." The malicious version has been tuned for:

  • Creating sophisticated phishing campaigns that target people's passwords and credit card details.
  • Executing advanced social engineering attacks to acquire sensitive information or gain unauthorized access to systems and networks.
  • Exploiting vulnerabilities in computer systems, software, and networks.
  • Creating and distributing malware.
  • Exploiting zero-day vulnerabilities for financial gain or systems disruption.
As CanadianKingpin12 said in private messages with the researchers, both DarkBART and DarkBERT will have live internet access and seamless integration with Google Lens for image processing. It is unclear whether CanadianKingpin12 modified the code of the legitimate version of DarkBERT or simply obtained access to the model and leveraged it for malicious use.

Suggested Correction(s):
No matter the origin of DarkBERT or the validity of the threat actor's claims, the trend of using generative AI chatbots is growing, and the adoption rate is likely to increase as well, since these tools offer an easy entry point for less capable threat actors and for those who want to expand operations into other regions but lack the language skills. With hackers already having access to two such tools, both developed in less than a month, this "underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape," SlashNext researchers believe.