FBI Warns GenAI Boosts Financial Fraud

The FBI has warned that criminals are using generative AI to enhance financial fraud schemes, and the Bureau has issued new guidance to the public to protect against these tactics.

A new alert from the US government agency’s Internet Crime Complaint Center (IC3) has highlighted how these tools enable bad actors to commit fraud on a larger scale and increase the credibility of their schemes.

GenAI-based tactics include impersonating victims’ relatives to demand ransom payment and gain access to bank accounts.

Read now: Report shows AI fraud and deepfakes top challenges for banks

How GenAI tools are used to facilitate fraud

The FBI observed that GenAI tools are being used to facilitate fraud in several ways.

Create more realistic written messages

Criminals use tools like OpenAI’s ChatGPT to improve written messages for social engineering attacks, such as romance and investment scams.

The tools also assist with language translation, limiting the grammatical and spelling errors made by foreign criminals targeting U.S. citizens. This removes the human errors that could otherwise serve as warning signs of fraud.

Messages can also be created more quickly, reaching a wider audience.

Additionally, AI-based chatbots are integrated into fraudulent websites to trick victims into clicking on malicious links.

The FBI added that GenAI allows fraudsters to generate large numbers of fictitious social media profiles designed to trick victims into sending money.

Generate fake images

Criminals use AI-generated images to create credible social media profile photos, identity documents, and other images to support their fraudulent schemes.

This includes producing photos to share with victims during private communications to convince them they are talking to a real person.

Other common uses of AI-generated images include creating images of celebrities or social media personas to promote counterfeit products.

There is also evidence that GenAI-produced pornographic photos of victims have been used to demand payment in sextortion schemes.

Voice and video impersonation of individuals

The FBI said deepfake technology is now frequently used to clone individuals’ voices and video likenesses to commit major fraud schemes.

This includes generating short audio clips to pose as a close relative of a victim and ask for immediate financial assistance or demand a ransom.

Another example shows criminals attempting to circumvent verification controls and gain access to bank accounts by obtaining audio clips of individuals and using AI to impersonate their voices.

AI-generated video is also used in real-time video calls, with fraudsters posing as company executives or other authority figures to pressure employees into making payments.

How to defend against AI-generated scams

The FBI has issued guidelines for the public to detect these types of AI-generated scams:

  • Create a secret word or phrase with your family to verify their identity
  • Look for subtle imperfections in images and videos, such as distorted hands or feet
  • Listen closely to tone and word choice to distinguish a legitimate call from a loved one from an AI-generated voice clone
  • Limit online content featuring your image or voice, make social media accounts private, and restrict followers to people you know
  • Verify the identity of callers by hanging up, looking up the phone number of the bank or organization that claims to be calling, and dialing that number directly
  • Never share sensitive information with people you have only met online or over the phone
  • Never send money, gift cards, cryptocurrency, or other assets to people you do not know