
65% of Leading AI Companies Found Leaking Secrets on GitHub

Wiz Security found 65% of top AI companies leaked secrets on GitHub, exposing sensitive data and highlighting critical security gaps.

Written By Ken Underhill
Nov 11, 2025

AI companies may be racing to innovate, but many are leaving critical security gaps behind. 

According to new research from Wiz Security, 65% of leading AI firms had verified secret leaks on GitHub, exposing sensitive data like API keys, tokens, and credentials often hidden in deleted forks and developer repositories. 

These leaks risk revealing private models, organizational structures, and even training data — a growing concern for companies driving the next generation of AI.

“Speed and security have to move together,” said the Wiz research team. 

Secrets Beneath the Surface

The study examined private firms from the Forbes AI 50, a benchmark list of leading AI innovators, including Anthropic, Glean, and Crusoe. 

Wiz’s analysis showed that almost two-thirds of these companies had confirmed leaks.

For AI startups balancing growth with governance, the findings reveal a critical blind spot: secrets hidden “below the surface” in historical commits, forks, and developer gists that traditional scanners miss.

The Depth, Perimeter, and Coverage Approach

Wiz’s researchers used a three-dimensional framework for identifying the attack surface.

  • Depth: The team analyzed full commit histories, deleted forks, and workflow logs to uncover secrets that standard GitHub searches overlook.
  • Perimeter: They traced leaks beyond official company repositories, identifying organization members who inadvertently committed corporate secrets to their personal projects or public repos.
  • Coverage: They looked for AI-specific secret types — including tokens for services like Weights & Biases, Hugging Face, and ElevenLabs — that traditional scanners often miss (a minimal scanning sketch follows this list).
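
To make the "depth" and "coverage" ideas concrete, the sketch below walks a locally cloned repository's full commit history and flags strings that resemble AI-service tokens. It is a minimal illustration rather than Wiz's tooling: the token patterns are simplified assumptions, and deleted forks or gists that exist only on GitHub would still have to be retrieved separately.

    # Minimal sketch: scan every commit reachable in a local clone for strings
    # that look like AI-service tokens. The patterns are illustrative
    # assumptions, not an exhaustive or official rule set.
    import re
    import subprocess

    TOKEN_PATTERNS = {
        "huggingface": re.compile(r"hf_[A-Za-z0-9]{30,}"),              # assumed hf_ prefix format
        "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{20,}"),
    }

    def scan_repo_history(repo_path: str):
        """Yield (commit, pattern_name, match) for every hit across the full history."""
        # rev-list --all covers every branch and tag, not just the default branch.
        commits = subprocess.run(
            ["git", "-C", repo_path, "rev-list", "--all"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        for commit in commits:
            # Dump the patch for this commit and scan it for token-like strings.
            diff = subprocess.run(
                ["git", "-C", repo_path, "show", "--unified=0", commit],
                capture_output=True, text=True,
            ).stdout
            for name, pattern in TOKEN_PATTERNS.items():
                for match in pattern.findall(diff):
                    yield commit, name, match

    if __name__ == "__main__":
        for commit, name, match in scan_repo_history("."):
            print(f"{commit[:8]}  {name}  {match[:10]}...")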

One alarming discovery involved an AI company whose deleted fork contained a Hugging Face token exposing access to more than 1,000 private models. 

Another case uncovered LangChain API keys with high-level organizational permissions, and enterprise-tier ElevenLabs API keys were found exposed in plain text.
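
Findings like the exposed Hugging Face token are typically triaged by checking whether the credential is still live before it is rotated. The snippet below is a hypothetical defender-side check using the huggingface_hub client; it is not part of the Wiz research.

    # Hypothetical triage step: confirm whether a leaked Hugging Face token is
    # still valid so its owner can prioritize revocation. Requires huggingface_hub.
    from huggingface_hub import HfApi
    from huggingface_hub.utils import HfHubHTTPError

    def token_is_live(token: str) -> bool:
        """Return True if the token still authenticates against the Hugging Face Hub."""
        try:
            identity = HfApi(token=token).whoami()
            print(f"Token is still active for account: {identity.get('name')}")
            return True
        except HfHubHTTPError:
            # A 401/403 response means the token has already been revoked or expired.
            return False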

Closing the Gaps in AI Security

To counter the growing risk of secret leaks in AI development, organizations must adopt a proactive and layered defense strategy. 

Security isn’t just about plugging gaps — it’s about building sustainable habits across code, people, and processes.

  • Mandate public VCS secret scanning: All public GitHub repositories should enable automated secret scanning. This should include forks and deleted branches where credentials may persist (a monitoring sketch follows this list).
  • Establish clear disclosure channels: Many AI companies lacked a functioning vulnerability disclosure process. Every organization should define, document, and publicize its reporting pathway.
  • Develop a proprietary secret detection policy: Companies should identify and monitor for AI-specific tokens unique to their platform (e.g., Hugging Face or Weights & Biases).
  • Harden developer hygiene: Treat developers’ personal repositories and GitHub accounts as part of the attack surface. Encourage pseudonymous handles, require MFA, and separate personal from corporate code activity.
  • Evolve scanning practices: As new AI tools emerge, scanners must adapt to detect novel token formats and file types that hold embedded secrets.
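
One way to operationalize the first recommendation is to poll GitHub's secret-scanning alerts across the organization and route new findings to the security team. The sketch below uses the REST endpoint for organization-level alerts; the organization name and token environment variable are placeholders.

    # Minimal sketch: list open secret-scanning alerts for an organization via
    # the GitHub REST API. "example-org" and GITHUB_TOKEN are placeholders.
    import os
    import requests

    ORG = "example-org"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    def open_secret_alerts(org: str) -> list[dict]:
        """Fetch open secret-scanning alerts across every repository in the org."""
        url = f"https://api.github.com/orgs/{org}/secret-scanning/alerts"
        resp = requests.get(url, headers=HEADERS, params={"state": "open"})
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        for alert in open_secret_alerts(ORG):
            repo = alert["repository"]["full_name"]
            print(f"{repo}: {alert['secret_type']} (created {alert['created_at']})")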

By adopting these mitigations, AI organizations can strengthen their cyber resilience.

The Hidden Cost of AI Speed

The Wiz research underscores a growing theme in AI security: innovation outpacing protection. 

As companies race to train larger models and deploy them more quickly, the infrastructure supporting them is becoming a rich target for attackers. 

Leaked credentials don’t just expose models — they open the door to supply chain compromise, model tampering, and data exfiltration.

Ultimately, AI progress cannot come at the cost of security.  

This growing tension between innovation and protection highlights why adopting zero-trust principles — which assume no user, device, or connection is inherently safe — is becoming essential for securing the rapidly expanding AI ecosystem.
