77% of Employees Share Company Secrets on ChatGPT, Report Warns

New report reveals 77% of employees share sensitive company data through ChatGPT and AI tools, creating major security and compliance risks.

Written by Ken Underhill
Oct 9, 2025

Corporate data security is facing an unprecedented challenge as new research reveals that most employees are inadvertently leaking sensitive company information through generative AI tools like ChatGPT. 

According to LayerX Security’s Enterprise AI and SaaS Data Security Report 2025, employees regularly paste sensitive corporate data into AI chatbots, often from personal, unmanaged accounts that bypass enterprise controls. ChatGPT accounts for 77% of online LLM access, approximately 18% of enterprise employees paste data into GenAI tools, and more than 50% of those paste events include corporate information.

In a message to The Register, Or Eshed, CEO of LayerX Security, said that enterprise data leaking via AI tools can raise geopolitical issues and regulatory and compliance concerns, and can lead to corporate data being inappropriately used for training if exposed through personal AI tool usage.

AI-powered productivity, compliance-driven risk

The findings underscore a growing identity and data management crisis within enterprise environments. 

LayerX’s telemetry data, collected through enterprise browser monitoring across global organizations, shows that 45% of enterprise users actively engage with generative AI platforms—43% of them using ChatGPT alone. 

The report warns that generative AI tools have become the leading channel for corporate-to-personal data exfiltration, responsible for 32% of all unauthorized data movement. 

Nearly 40% of uploaded files contain personally identifiable information (PII) or payment card industry (PCI) data, while 22% of pasted text includes sensitive regulatory information.

For organizations bound by regulations like GDPR, HIPAA, and SOX, such exposure creates a ticking compliance time bomb.

The hidden risks of shadow AI

LayerX researchers found that most risky interactions occur through unmanaged browsers and personal accounts, which fall completely outside identity management systems.

In fact, 71.6% of generative AI access happens via non-corporate accounts—a trend mirrored in other major SaaS platforms like Salesforce (77%), Microsoft Online (68%), and Zoom (64%).

The real danger lies in the method’s simplicity: copy/paste behavior. 

Among users who paste into GenAI tools, the average is 6.8 pastes per day, and more than half of those (3.8 pastes) include sensitive corporate data.

This manual, invisible process bypasses traditional data loss prevention (DLP) systems, firewalls, and access controls entirely.

Threat actors and data aggregators can exploit this leakage in multiple ways—from training large language models on exposed data to targeting specific industries through leaked code, credentials, or proprietary workflows. 

Building a multi-layered defense for the AI era

To defend against AI-driven attacks and data exposure, organizations should adopt a multilayered approach that secures both user interactions and backend infrastructure. Key mitigations include:

  • Enforce centralized access controls, including SSO, least privilege, and device-based policies, for all AI tools.
  • Monitor browsers and endpoints to track data flows and block unauthorized AI use or data leaks (see the sketch after this list).
  • Harden AI systems and APIs through segmentation, validation, and prompt filtering to stop malicious inputs.
  • Monitor runtime and containers for anomalies, unauthorized activity, or persistence tactics.
  • Implement AI governance and posture management (AI-SPM) to inventory assets, assess risks, and enforce policies.
  • Train employees on secure AI use, highlighting data-sharing risks and prompt manipulation awareness.
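To make the browser and endpoint monitoring item above concrete, here is a minimal sketch of a DLP-style check that scans text before it is pasted into a GenAI tool. The domain blocklist, the regex patterns, and the classify_paste helper are hypothetical assumptions for illustration; they are not drawn from the LayerX report or any particular product.

import re

# Hypothetical blocklist of GenAI domains an endpoint agent might watch.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

# Illustrative regex patterns for common sensitive-data types.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum to cut false positives on card-like digit runs."""
    digits = [int(c) for c in candidate if c.isdigit()]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify_paste(text: str, destination_host: str) -> list[str]:
    """Return labels of sensitive data found in text bound for a GenAI domain."""
    if destination_host not in GENAI_DOMAINS:
        return []
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if label == "card" and not luhn_ok(match):
                continue  # digit run failed the checksum; likely not a card
            findings.append(label)
            break
    return findings

# An agent hooking the paste event could block or alert when this is non-empty.
print(classify_paste("Customer SSN 123-45-6789, card 4111 1111 1111 1111", "chatgpt.com"))
# -> ['ssn', 'card']

In practice, a browser extension or endpoint agent would intercept the paste event, run a check like this against the destination page, and then block the action or log an alert when findings come back non-empty.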

Security teams should also conduct forensic analysis of browser logs and network traffic to identify potential AI-driven data leaks and to ensure incident response plans are tested.
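A starting point for that kind of forensic pass could be as simple as the sketch below, which tallies visits to GenAI domains per user from a proxy or browser-history log. The CSV schema (timestamp, user, and url columns), the proxy_log.csv file name, and the domain list are assumptions for illustration; a real deployment would adapt this to its own proxy’s log format.

import csv
from collections import Counter
from urllib.parse import urlparse

# Hypothetical GenAI domains to flag in proxy or browser-history logs.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def genai_visits_by_user(log_path: str) -> Counter:
    """Tally visits to GenAI domains per user.

    Assumes a CSV log with timestamp, user, and url columns; adjust the
    field names to match your proxy's actual schema.
    """
    visits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = urlparse(row["url"]).hostname or ""
            if host in GENAI_DOMAINS:
                visits[row["user"]] += 1
    return visits

if __name__ == "__main__":
    for user, count in genai_visits_by_user("proxy_log.csv").most_common(10):
        print(f"{user}: {count} GenAI visits")

Unusually high counts, or any GenAI traffic from accounts that should have no business need for it, would be candidates for deeper review of the associated sessions.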

The growing gap between AI innovation and security control

The LayerX report paints a stark picture of how AI adoption has outpaced enterprise governance. 

According to the National Bureau of Economic Research, 23% of workers now use generative AI on the job.

Yet, despite its ubiquity, most organizations lack a coherent security framework to manage AI-related risks.

This trend marks a broader shift in enterprise risk management: where once data loss stemmed from phishing or misconfigured storage, it now occurs through everyday AI-assisted work. 

As AI continues to embed itself in corporate workflows, security teams must balance innovation with discipline, ensuring that enthusiasm for productivity doesn’t undermine confidentiality.

Beyond data exposure, the rise of deepfake technology introduces a new class of AI-driven threats that blur truth, erode trust, and require advanced detection capabilities.


Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
