AI Deepfake Scams Surge: Over $200 Million Lost in Just Three Months


It was once said, “Seeing is believing.” But in 2025, that no longer holds true.

In recent months, deepfakes — fake but highly realistic videos, images, or voice recordings created using artificial intelligence — have rapidly grown in number, sophistication, and impact. 

What was once a novelty has now become a serious threat to personal privacy, business integrity, and national security. While public figures remain frequent targets, a troubling shift is happening: everyday people, including children and women, are increasingly becoming victims, often for harassment, blackmail, or school-based humiliation.

$200M lost in three months…and that number could grow

A report by Resemble AI, titled “Q1 2025 Deepfake Incident Report,” paints a worrying picture of how deepfake crimes are evolving. Analyzing 163 documented incidents between January and April 2025, researchers found that scammers are not only chasing big money, they’re now also going after personal reputations and mental well-being.

The financial toll is high. “Documented financial losses from deepfake-enabled fraud exceed $200 million in Q1 2025 alone,” the report noted. And those numbers could grow. 

Ordinary people are now the main target

While celebrities and politicians still account for 41% of all deepfake victims, the Resemble AI report highlights a shift in focus: private citizens now make up 34% of victims, with educational institutions and women being especially vulnerable. 

The study also found that non-consensual explicit content accounted for 32% of all cases, the highest among all uses, followed by financial fraud (23%), political manipulation (14%), misinformation and disinformation (13%), identity theft (10%), and other purposes (8%).

These synthetic videos are often used for revenge, blackmail, or online harassment. Victims report serious psychological harm, with limited legal recourse or platform accountability.

$25M lost in a single deepfake scam

Imagine a video call where your boss asks you to transfer money, but it’s actually a deepfake. This isn’t a far-fetched scenario.

In February 2024, a multinational firm in Hong Kong lost $25 million after an employee fell for a deepfake video call impersonating the company's chief financial officer and other top executives. Believing the request was genuine, the employee sent funds to a fraudulent account.

In another alarming case, a North Korean operative used a deepfake identity to trick KnowBe4, a cybersecurity company, into hiring him in the latter half of 2024. If even a cybersecurity company can be fooled, it shows just how convincing these fakes have become.

Deepfakes are beating our defenses

The latest deepfakes are incredibly realistic. Some need just 3–5 seconds of voice audio to clone a person’s voice with 85% accuracy. Facial manipulations are now so refined that 68% of video deepfakes can’t be told apart from real footage. And with new tools that combine audio and video, even live conversations can be faked.

Adding to the problem, many of these fakes are designed to avoid detection, making it hard for existing security systems to catch them.

Legislative response: US passes the ‘Take It Down Act’

The US House voted overwhelmingly, 409 to 2, to pass the “Take It Down Act”, aimed at tackling both real and AI-generated revenge porn. The bipartisan bill had already cleared the Senate unanimously in February.

The legislation compels social media platforms and websites to remove explicit content, including deepfakes, within 48 hours of a request from the victim.

First Lady Melania Trump also advocated for the bill. At the US Capitol, she described the online environment for teens as “toxic,” saying: “It’s heartbreaking to witness young teens, especially girls, grappling with the overwhelming challenges posed by malicious online content like deep fakes,” per CBS News.

After the House vote, she said the bill made a “powerful statement that we stand united in protecting the dignity, privacy, and safety of our children.”

While major platforms like Meta, TikTok, and Snapchat support the law, digital rights groups warn that it could lead to the over-removal of content and the abuse of takedown tools without proper safeguards.

A global wake-up call

As deepfake technology outpaces legislation and law enforcement, victims are often forced to become their own investigators, spending time and money many don’t have just to be heard.

While the US has taken a meaningful step with the Take It Down Act, many countries remain without adequate laws or enforcement. Experts argue the fight against deepfakes must be global, multi-layered, and deeply human, combining tech tools, policy reform, and survivor advocacy.

For businesses, trust in what we see and hear can no longer be taken for granted. Instead, companies must adopt a “zero trust” mindset, verifying identities through multiple independent channels and training staff to be skeptical of even the most convincing media.

Employee education is now critical. Organizations must teach teams to treat video and audio content with the same suspicion once reserved for shady emails or suspicious text messages.

Organizational defense: 5 tips to counter deepfake threats

As deepfakes become increasingly convincing and accessible, organizations must take action now to strengthen their defenses. Here are five practical steps to get started:

  1. Use zero-trust communication protocols: Require multi-channel verification for financial, HR, or sensitive requests, especially those made via video or audio.
  2. Create a deepfake incident response plan: Develop a playbook for promptly identifying, escalating, and responding to suspected deepfake attacks.
  3. Train employees to identify synthetic media: Incorporate deepfake awareness into regular cybersecurity training by using real-world examples and role-playing exercises.
  4. Protect executive media assets: Limit access to high-quality video and audio of executives and watermark public content to reduce misuse.
  5. Tighten vendor and hiring checks: Add steps like live human interviews or biometric verification to prevent deepfake impersonators from slipping through.
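The first tip, multi-channel verification, can be made concrete in code. The sketch below is a hypothetical illustration, not a real product or the report's recommendation: a high-risk request (say, a wire transfer asked for on a video call) stays blocked until confirmations arrive on at least two independent channels that were registered out of band beforehand. The channel names and the two-confirmation threshold are assumptions for the example.

```python
from dataclasses import dataclass, field

# Assumed policy for this sketch: two confirmations on independent,
# pre-registered channels before a sensitive action is released.
REQUIRED_CONFIRMATIONS = 2

@dataclass
class HighRiskRequest:
    requester: str
    action: str
    confirmed_via: set = field(default_factory=set)

    def confirm(self, channel: str, registered_channels: set) -> None:
        # Only channels registered out of band count. The channel the
        # request arrived on (e.g., the video call itself) is never
        # registered, so a deepfaked call can't approve its own request.
        if channel in registered_channels:
            self.confirmed_via.add(channel)

    def approved(self) -> bool:
        return len(self.confirmed_via) >= REQUIRED_CONFIRMATIONS

# Usage: a transfer "requested by the CFO" on a video call stays blocked
# until it is confirmed via, say, a known phone number and the ticketing system.
registered = {"phone_callback", "ticketing_system", "in_person"}
req = HighRiskRequest(requester="cfo", action="wire transfer")
req.confirm("video_call", registered)      # ignored: not a registered channel
assert not req.approved()
req.confirm("phone_callback", registered)
req.confirm("ticketing_system", registered)
assert req.approved()
```

The key design point is that verification channels are chosen by the organization in advance, never by the requester; a convincing fake voice or face on the inbound channel contributes nothing toward approval.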

Deepfake threats aren’t going away—but with the right policies, training, and tools, organizations can stay one step ahead.

Aminu Abdullahi
