
MalTerminal Malware Turns GPT-4 Into a Ransomware Factory

Researchers uncover MalTerminal, the first GPT-4-powered malware that creates ransomware and reverse shells on demand.

Written By
Ken Underhill
Sep 22, 2025

A newly discovered malware strain dubbed MalTerminal is rewriting the playbook for cyberattacks by using OpenAI’s GPT-4 to generate ransomware and reverse shells in real time.

Researchers from SentinelLABS disclosed the finding, calling it the first known AI-powered malware found in the wild.

“The incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft,” SentinelOne researchers stated in a report published on their website. “With the ability to generate malicious logic and commands at runtime, LLM-enabled malware introduces new challenges for defenders.” 

MalTerminal marks a shift in malware creation

The emergence of MalTerminal signals a turning point in malware development. 

Instead of embedding static payloads, the tool acts as a malware generator, asking its operator to choose between ransomware and a reverse shell before crafting fresh Python code through the GPT-4 API. Because every execution can produce unique logic, signature-based detection becomes far more difficult.

From proof-of-concept to real-world threats

The discovery follows earlier research into PromptLock, an academic proof-of-concept ransomware discovered in August 2025 by ESET. While PromptLock used a local model to illustrate risk, MalTerminal shows adversaries are already experimenting with LLM-driven attacks in real-world contexts.

MalTerminal was found among a cluster of suspicious Python scripts and a compiled Windows binary, MalTerminal.exe.

Inside the MalTerminal sample

Analysis revealed hardcoded API keys and prompt structures inside the samples, enabling the malware to interact with OpenAI’s now-deprecated chat completions endpoint. Because that endpoint was retired in November 2023, the tool likely predates that date, making it the earliest known LLM-enabled malware sample.
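
Those embedded artifacts are themselves useful detection signals. As a rough illustration, the Python sketch below scans files for OpenAI-style secret keys and prompt-like strings; the key regex and keyword list are illustrative assumptions for this article, not indicators published by SentinelLABS.

    import re
    import sys
    from pathlib import Path

    # Illustrative patterns only: OpenAI-style secret keys ("sk-...") and a few
    # prompt-like strings of the kind reported in the MalTerminal samples.
    KEY_PATTERN = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")
    PROMPT_HINTS = [b"chat/completions", b"You are a", b"reverse shell", b"ransomware"]

    def scan_file(path: Path) -> None:
        data = path.read_bytes()
        for match in KEY_PATTERN.finditer(data):
            # Print only a prefix of the match to avoid echoing a full secret.
            print(f"{path}: possible embedded API key -> {match.group()[:10].decode()}...")
        for hint in PROMPT_HINTS:
            if hint in data:
                print(f"{path}: prompt-like artifact -> {hint.decode()}")

    if __name__ == "__main__":
        for target in sys.argv[1:]:
            scan_file(Path(target))

Run against a folder of suspect scripts or binaries, a sweep like this surfaces candidates for deeper analysis rather than delivering a verdict on its own.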

When executed, MalTerminal prompts its operator to select an attack type. The binary then submits a request to GPT-4, dynamically retrieving ransomware or reverse shell code.

Because the malicious logic is fetched at runtime rather than stored in the binary, static analysis tools cannot easily flag it.

Investigators also uncovered related utilities, including TestMal2.py and testAPI.py, which mirrored the main malware’s functions, as well as FalconShield, an experimental scanner apparently written by the same author. Together, these artifacts indicate an ecosystem of tools designed to explore both offensive and defensive applications of LLMs.

Implications for cybersecurity teams

MalTerminal and PromptLock underscore how quickly threat actors may adapt large language models for malicious purposes.

By embedding AI into payloads, attackers can scale operations, evade static defenses, and innovate beyond traditional ransomware playbooks.

How organizations can respond

Although LLM-enabled malware remains in its early stages, defenders should prepare for a future where malicious code is created on demand. Security teams can take the following actions:

  • Monitor for unauthorized API usage or suspicious calls to large language model endpoints (see the connection-monitoring sketch after this list).
  • Apply network controls to detect outbound connections from unknown executables.
  • Revoke or rotate exposed API keys promptly, and maintain strict controls over key distribution.
  • Incorporate runtime behavioral analysis into antivirus and endpoint detection tools.
  • Train incident response teams on identifying artifacts such as hardcoded prompts or embedded keys.
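
To make the first item concrete, the sketch below flags local processes holding outbound TCP connections to a watchlist of LLM API hosts. The host list is an assumption for illustration, the check relies on the third-party psutil package and typically needs elevated privileges, and a production control would more likely draw on proxy, DNS, or firewall logs than on point-in-time resolution.

    import socket
    import psutil  # third-party: pip install psutil

    # Hypothetical watchlist; extend with whatever LLM providers your policy covers.
    LLM_HOSTS = ["api.openai.com", "api.anthropic.com"]

    def resolve(hosts):
        """Resolve each hostname to its current set of IP addresses."""
        ips = set()
        for host in hosts:
            try:
                for info in socket.getaddrinfo(host, 443):
                    ips.add(info[4][0])
            except socket.gaierror:
                pass
        return ips

    def find_llm_connections():
        watch_ips = resolve(LLM_HOSTS)
        for conn in psutil.net_connections(kind="tcp"):
            if conn.raddr and conn.raddr.ip in watch_ips:
                try:
                    name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                except psutil.Error:
                    name = "unknown"
                print(f"{name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")

    if __name__ == "__main__":
        find_llm_connections()

Alerting on an unknown executable in that output, rather than on the traffic alone, keeps attention on the binaries making the calls.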

For broader resilience, organizations should adopt zero-trust principles, enforce multi-factor authentication, and maintain tight governance over any AI integrations to limit the potential for abuse.

While these threats remain mostly experimental, they expose blind spots in current security models and challenge defenders to look for new indicators such as prompt content. 

For another glimpse into how AI can be weaponized, see how ChatGPT is used to bypass CAPTCHA challenges.


Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
