Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud

Summary:
Generative Artificial Intelligence (AI) has paved the way for criminals to commit fraud at a scale larger than ever before. “Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud,” notes the FBI in a new advisory.

According to the FBI, criminals are increasingly leveraging AI-generated content to amplify their fraudulent schemes. Notably, AI-generated text is being used to create convincing fake social media profiles, messages, and fraudulent websites for scams such as romance, investment, and confidence fraud. The absence of grammatical errors in AI-generated text makes these fake profiles and sites more convincing, increasing the likelihood of victims falling for these scams. Additionally, AI tools are being used to generate realistic images, such as fake IDs and social media photos, to facilitate identity theft and impersonation. In some cases, criminals have even used AI to produce images of celebrities or social media influencers to promote counterfeit products or non-delivery schemes. AI-generated audio and video are also being employed to impersonate loved ones or public figures, with criminals creating highly convincing clips to manipulate victims into sending money.

Analyst Comments:
AI is still in the early stages of development, with significant potential for growth. Although the technology has existed for years, the release of AI services like ChatGPT has brought it into the spotlight, leading to widespread awareness and popularity. A growing number of specialized AI tools are now available, each designed for tasks such as writing emails, generating realistic images, or creating videos that impersonate individuals or personas. While these models are far from perfect, they are expected to improve over time, thanks to algorithms that learn and adapt based on user input and feedback. Although this progress holds promise, it also presents a concerning possibility: criminals may exploit AI to rapidly generate more convincing, error-free scams at an unprecedented scale.

Suggested Corrections:
FBI tips to protect yourself from AI-related scams:

  • Create a secret word or phrase with your family to verify their identity.
  • Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic teeth or eyes, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, lag time, mismatched voices, and unrealistic movements.
  • Listen closely to the tone and word choice to distinguish between a legitimate phone call from a loved one and an AI-generated vocal cloning.
  • If possible, limit the online content featuring your image or voice, make social media accounts private, and restrict followers to people you know, to minimize fraudsters' ability to use generative AI software to create fraudulent identities for social engineering.
  • Verify the identity of the person calling you by hanging up, researching the contact information of the bank or organization purportedly calling you, and calling that phone number directly.
  • Never share sensitive information with people you have met only online or over the phone.
  • Do not send money, gift cards, cryptocurrency, or other assets to people you do not know or have met only online or over the phone.

Link(s):
https://www.ic3.gov/PSA/2024/PSA241203