
OpenAI Signs $200M Defense Department Deal, Then Calms Fears About Weaponized AI

OpenAI for Government will consolidate ChatGPT Gov and other existing resources. The US Department of Defense plans to use it to enhance administrative work and cybersecurity.

Written By Megan Crouse
Jun 18, 2025

This article was originally published on eWeek.

OpenAI signed a $200 million contract with the US Department of Defense on June 17.

The deal includes consolidating all the AI company’s existing public sector products under one banner, OpenAI for Government, which encompasses ChatGPT Gov as well as the company’s partnerships with the US National Labs, the Air Force Research Laboratory, NASA, the National Institutes of Health, and the Treasury Department.

The Defense Department will use OpenAI for Government to explore the use of AI in administration and security.

What is OpenAI for Government? 

OpenAI for Government features the company’s most advanced AI within the secure environments already established for ChatGPT Enterprise and ChatGPT Gov. Participants will receive hands-on support and advance information about what OpenAI is working on next. OpenAI said custom AI models for national security will be made available “on a limited basis.”

Initial use cases in the DOD include improving health care portals, searching program and acquisition data, and encouraging proactive cyber defense. The program is managed through the Chief Digital and Artificial Intelligence Office (CDAO).

“Through OpenAI for government, we’re going to help accelerate the U.S. government’s adoption of AI and deliver AI solutions that make a tangible difference for the American people,” OpenAI national security lead Katrina Mulligan wrote on LinkedIn.

US government use must follow OpenAI’s ethics guidelines 

To calm fears of weaponized AI, OpenAI notes that “All use cases must be consistent with OpenAI’s usage policies and guidelines.” Those policies require compliance with applicable laws and prohibit:

  • Creating or expanding facial recognition databases without consent.
  • Social scoring.
  • Profiling people to determine whether they are likely to commit a crime.
  • Automating high-stakes decisions with significant impacts on people’s lives, such as immigration decisions.

In October 2024, OpenAI announced that it would not contribute its technology to the development of weapons. Shortly thereafter, it partnered with Anduril on counter-drone defense systems.

OpenAI is not the only AI company to make a defense deal. Meta’s Llama AI models are available to US government agencies and to national security contractors such as Lockheed Martin.
