SHARE
Facebook X Pinterest WhatsApp

CamoLeak: GitHub Copilot Flaw Allowed Silent Data Theft

A GitHub Copilot Chat bug let attackers steal private code via prompt injection. Learn how CamoLeak worked and how to defend against AI risks.

Written By
thumbnail Ken Underhill
Ken Underhill
Oct 10, 2025
eSecurity Planet content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More

A critical vulnerability in GitHub Copilot Chat (CVSS 9.6) allowed attackers to siphon secrets and source code from private repositories and even steer Copilot’s replies with malicious instructions.

GitHub has already released a fix for this vulnerability.

In identifying the leak, security researcher Omer Mayraz stated the vulnerability “… allowed silent exfiltration of secrets and source code from private repos.”

When helpful becomes hazardous

Copilot Chat runs with the permissions of the user asking questions. 

That means a successful prompt injection can read from private repos, suggest tainted code, and leak data—without tripping traditional network egress controls. 

For software organizations adopting AI assistants for code review and PR triage, this is a direct intellectual property and cloud credential risk.

Inside the attack

The attack began by hiding an attacker-controlled prompt inside a pull-request description using GitHub’s “invisible comments” feature. 

Although this content is not visible in the normal UI, Copilot Chat still ingests the repository and PR context (including hidden metadata) when building responses. 

Because Copilot answers using the permissions of the querying user, any instructions embedded in that unseen context could influence the assistant’s behavior for any developer who opened or asked Copilot about the PR. 

In short, an attacker could influence the assistant’s outputs for other users without those users ever seeing the malicious text.

Data Access

Copilot is context-aware and operates with the privileges of the person invoking it. That means it can access private repository content when a permitted user requests help. 

The attack leveraged this privilege model: the injected prompt instructed Copilot to search for sensitive artifacts (for example, keys or vulnerability descriptions) in the victim’s accessible repositories and to render or encode those findings into output that could be exfiltrated.

Bypassing Camo proxy

Directly leaking data to an arbitrary external domain via a standard <img> tag or script is generally blocked by GitHub’s Content Security Policy (CSP). 

GitHub mitigates third-party image inclusion by routing external image requests through a Camo proxy: when Markdown or other repository content references an external image, GitHub rewrites the image URL into a camo.githubusercontent.com URL that includes a cryptographic signature. 

The Camo service only fetches upstream content when the signed URL validates as coming from GitHub, preventing attackers from freely crafting addresses that cause a browser to contact an attacker-controlled server.

Rather than attempting to directly post arbitrary external URLs, the attacker prepared a set of pre-signed proxy URLs that GitHub would accept. 

Each pre-signed URL pointed to a benign 1×1 transparent pixel hosted on infrastructure the attacker controlled. 

By assembling a dictionary of such signed proxy URLs — one per character or symbol the attacker anticipated needing — the adversary created building blocks that could be combined in sequence inside Copilot-generated output.

Encoding data

The injected prompt coerced Copilot to produce output that referenced the pre-signed proxy URLs in a particular order that encoded repository content. 

Conceptually, Copilot “drew” text by emitting a stream of image references; when the victim’s browser rendered the assistant output, the browser fetched the images via GitHub’s Camo proxy. 

Because each proxy fetch ultimately resolved to the attacker’s hosting, the pattern and order of requests effectively conveyed the stolen data back to the attacker — one small request per character — without leaving the data in normal server logs or visible UI elements.

Evasion technique

To avoid caching and to ensure each fetch represented fresh data, attackers appended ephemeral query parameters to the pre-signed URLs so that each request would be fetched rather than served from cache. 

The use of URL fragments or client-side reads rather than standard query parameters further reduced the chance that the exfiltrated content would appear in conventional server access logs, because fragments are handled in the client (browser) layer rather than sent to the origin in HTTP requests.

The technique combined several defensive gaps: Copilot’s ingestion of hidden repo context, the assistant’s operating privileges, trusted CDN/proxy rewriting (which made the requests look like normal GitHub activity), and the browser’s willingness to fetch many small, invisible resources. 

Together these produced a low-noise channel that could exfiltrate sensitive content without showing obvious malicious output to the developer. 

Reducing risk from AI-assisted attacks

To reduce the risk of similar AI-assisted vulnerabilities, organizations should adopt a layered approach that combines access control, identity protection, monitoring, and developer education.

  • Restrict access and permissions: Limit Copilot use to necessary teams and repos, apply least privilege, and disable unverified features like image or HTML rendering.
  • Enhance identity and secrets management: Enforce MFA, rotate secrets, and monitor for unauthorized access or key misuse.
  • Monitor, detect, and respond: Track Copilot activity, investigate anomalies, and include AI compromise scenarios in incident response plans.
  • Educate and harden developer workflows: Train developers to treat PR content as untrusted, review AI-generated code, and block unverified external rendering.

Together, these measures help organizations reduce exposure, strengthen oversight, and ensure that AI-assisted development remains both innovative and secure.

When AI becomes an attack surface

CamoLeak reflects a broader shift: as AI tools fuse with developer platforms, context becomes an attack surface. 

Controls once scoped to Markdown rendering or image proxies can become data-exfil channels when mediated by an agent. 

Enterprises should evaluate AI assistants like any privileged integration — map data flows, constrain capabilities (especially tool use and rendering), and prioritize rapid vendor-led mitigations. 

As AI platforms evolve, even seemingly narrow presentation features can become high-impact compromise paths if not locked down quickly.

AI continues to blur the line between trusted automation and potential threat. That’s because the same technologies enabling developer productivity are also driving a new wave of deception — most notably in the rise of AI-generated deepfakes.

thumbnail Ken Underhill

Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.

Recommended for you...

AI Chatbots Exploited as Covert Gateways to Enterprise Systems
Ken Underhill
Oct 9, 2025
77% of Employees Share Company Secrets on ChatGPT, Report Warns
Ken Underhill
Oct 9, 2025
Phantom Taurus: China-Linked Hackers Target Global Governments
Ken Underhill
Oct 9, 2025
Met Police Arrest Teenagers in Kido Nursery Ransomware Attack
Ken Underhill
Oct 9, 2025
eSecurity Planet Logo

eSecurity Planet is a leading resource for IT professionals at large enterprises who are actively researching cybersecurity vendors and latest trends. eSecurity Planet focuses on providing instruction for how to approach common security challenges, as well as informational deep-dives about advanced cybersecurity topics.

Property of TechnologyAdvice. © 2025 TechnologyAdvice. All Rights Reserved

Advertiser Disclosure: Some of the products that appear on this site are from companies from which TechnologyAdvice receives compensation. This compensation may impact how and where products appear on this site including, for example, the order in which they appear. TechnologyAdvice does not include all companies or all types of products available in the marketplace.