
Most People Can’t Tell AI Phishing Emails from the Real Thing

AI-crafted phishing emails are fooling every generation. Learn why no one’s immune—and how to build stronger defenses.

Written By Ken Underhill
Oct 6, 2025

With artificial intelligence, cybercriminals are crafting phishing messages so convincing that even the most tech-savvy users are being duped. 

A new global survey reveals that most people — across generations, from Gen Z to baby boomers — can’t reliably distinguish between a phishing email written by AI and an authentic one.

“Because our personal and professional lives are so intertwined, and there’s widespread cross-contamination between personal and work devices, a successful phishing attack on your personal data and devices could compromise your work security, and vice versa,” said Ronnie Manning, chief brand advocate at Yubico. 

No one is immune: AI phishing scams are fooling every generation

The survey, conducted by Talker Research on behalf of Yubico, polled 18,000 employed adults from nine countries, including the United States, the United Kingdom, Australia, and Japan.

It found that only 46% of respondents correctly identified a phishing email written by AI, while 54% either thought it was legitimate or were unsure.

Surprisingly, awareness did not differ significantly across age groups: Gen Z (45%), millennials (47%), Gen X (46%), and baby boomers (46%) all struggled equally to spot the scams.

This lack of generational difference underscores that AI-driven phishing isn’t just a problem for older, less tech-literate users — it’s a universal challenge. 

The study also found that less than one-third (30%) of respondents could correctly identify a legitimate, human-written email, highlighting how human error remains a significant factor in cybersecurity incidents.

AI makes phishing smarter

Traditional phishing attempts were often easy to spot, betrayed by grammatical errors, odd phrasing, or suspicious links. But AI tools can now produce clean, professional, and personalized messages that appear authentic.

These messages often mimic internal corporate emails or messages from trusted sources, using natural language models to remove the telltale signs of fraud.

The result: people are fooled more often and faster. According to Yubico’s findings, 44% of respondents admitted to interacting with a phishing message—by clicking a link or opening an attachment—within the last year. Even more concerning, 13% confessed they had done so within the last week.

Younger generations appear especially vulnerable. Sixty-two percent of Gen Z respondents said they had interacted with a phishing scam in the past year, compared to 51% of millennials, 33% of Gen Xers, and 23% of baby boomers. The most common attack methods included phishing via email (51%), text messages (27%), and social media (20%).

When asked why they fell for these scams, 34% of victims said the message appeared to come from a real, trusted source, while 25% admitted they were in a rush and didn’t think critically about the content. This highlights how both cognitive overload and trust exploitation remain central to phishing success.

How personal habits are putting work data at risk

The consequences of these lapses are significant. Respondents reported disclosing personal information such as email addresses (29%), full names (22%), and phone numbers (21%) to phishing actors. At work, similar data points — including professional emails and internal documents — were also compromised.

Half of respondents (50%) acknowledged being logged into work accounts on personal devices, often without their employer’s knowledge. Conversely, 40% admitted to checking personal email on work devices, 19% stored work documents on personal devices, and 17% accessed online banking from work computers.

This digital overlap creates a dangerous “cross-contamination” risk. Once an attacker breaches a personal account, they can pivot to corporate systems through synced passwords or shared devices. 

Despite these risks, 30% of people still lack multi-factor authentication (MFA) on personal accounts, and 40% of organizations fail to provide any cybersecurity training to employees.
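
For readers unfamiliar with how a basic MFA factor works, here is a minimal sketch of TOTP, the rotating six-digit codes most personal accounts support, using the pyotp package; the secret is generated on the fly here, but in practice it is the value behind the QR code an account shows at enrollment. Note that TOTP codes can still be phished in real time, which is why the recommendations below favor FIDO2 and passkeys.

    import pyotp

    # Shared once between the account and the user's authenticator app.
    secret = pyotp.random_base32()

    totp = pyotp.TOTP(secret)  # 6-digit codes rotating every 30 seconds

    code = totp.now()          # what the phone app would display right now
    print("Current code:", code)
    print("Verifies:", totp.verify(code))  # True within the time window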

Building smarter defenses

To defend against advanced AI phishing threats, organizations should adopt a layered approach that combines technology, policy, and user awareness. The following steps help reduce risk and strengthen overall resilience.

  • Adopt phishing-resistant authentication and consistent access control: Use FIDO2 security keys or device-bound passkeys instead of SMS codes, and standardize authentication across applications to reduce confusion and vulnerabilities (the first sketch after this list shows why this resists phishing).
  • Deploy AI-powered threat detection and secure email protocols: Use advanced email filters and behavioral analytics to detect AI-generated content, and apply DMARC, SPF, and DKIM to verify senders and prevent spoofing (a record-check sketch also follows this list).
  • Educate continuously through adaptive, context-aware training: Replace annual courses with continuous micro-training and simulations that teach staff to spot evolving AI-driven phishing tactics.
  • Enforce strict device management and least-privilege principles: Restrict personal device use, segment sensitive systems, and grant access only as needed to limit breach impact.
  • Establish rapid reporting and multi-channel verification procedures: Foster a no-blame culture where employees promptly report anything suspicious and verify unusual requests through a second channel.
  • Integrate AI-aware incident response and deepfake defense measures: Update incident response (IR) plans to address AI impersonation and deepfakes, deploy anti-spoofing tools, and watermark official communications to ensure authenticity.
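
To illustrate the first item, here is a minimal Python sketch of the challenge-response idea behind FIDO2 and passkeys, using the cryptography package. It is a simplification, not the actual WebAuthn protocol: the key pair, challenge, and origin string are all hypothetical stand-ins. The point is that the user never types a reusable secret, and the signature is bound to the legitimate site's origin, so a lookalike phishing domain gets nothing it can replay.

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    import os

    # Enrollment: the authenticator generates a key pair; only the
    # public key is stored by the server.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    # Login: the server issues a fresh, single-use challenge.
    challenge = os.urandom(32)
    origin = b"https://accounts.example.com"  # hypothetical relying party

    # The authenticator signs the challenge bound to the origin. A
    # lookalike phishing domain yields a different origin string, so a
    # signature captured there would never verify here.
    signature = private_key.sign(challenge + origin, ec.ECDSA(hashes.SHA256()))

    # The server verifies with the stored public key; a mismatch
    # raises cryptography.exceptions.InvalidSignature.
    public_key.verify(signature, challenge + origin, ec.ECDSA(hashes.SHA256()))
    print("Challenge verified; no reusable secret ever left the device.")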

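And for the second item, a quick way to see whether a domain publishes SPF and DMARC records is a pair of DNS TXT lookups. The sketch below uses the dnspython package and a placeholder domain; DKIM is omitted because its keys live under a per-message selector that can only be read from an email's DKIM-Signature header.

    import dns.resolver

    def get_txt(name: str) -> list[str]:
        """Return all TXT strings published at a DNS name, or [] if none."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    domain = "example.com"  # placeholder; substitute a real domain

    spf = [r for r in get_txt(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt("_dmarc." + domain) if r.startswith("v=DMARC1")]

    print("SPF:  ", spf or "none published")
    print("DMARC:", dmarc or "none published")
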
Together, these measures lay the groundwork for stronger cyber resilience, which is an essential foundation as AI-driven phishing grows more advanced and harder to detect.

AI is rewriting the rules of deception

As AI models become more sophisticated, the barriers to crafting convincing scams are vanishing. The technology that enables productivity and personalization also empowers threat actors to scale attacks at unprecedented levels.  

Both individuals and organizations must adopt proactive defense strategies — emphasizing education, authentication, and accountability — to counter increasingly human-like digital threats.

With AI-driven attacks blurring the line between real and fake, adopting a Zero Trust security model is no longer optional—it’s essential.

Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
