February 26, 2026

How Anthropic will lose more than $200 million if it refuses Pentagon demands

The clock is ticking for Anthropic, one of the world’s largest artificial intelligence companies, after the Department of Defense threatened to blacklist the company from working with the military. 

Defense Secretary Pete Hegseth said the company has until 5 p.m. on Friday to grant the Pentagon full, unfettered access to its AI model. If not, Hegseth said, the department would invoke the Defense Production Act, allowing the military to use the model and labeling Anthropic a supply chain risk, according to The New York Times. The move could put the company’s military contracts, worth hundreds of millions of dollars, at risk.

The disagreement comes after Anthropic asked for assurances that the Pentagon wouldn’t use the company’s AI to spy on Americans or for autonomous weapons. However, the Trump administration is demanding to use the technology without restrictions, Al Jazeera reports.

Hegseth has envisioned a military AI system that operates “without ideological constraints” that may limit lawful military operations. He said he would not allow the military’s AI to be “woke.”

What is the Pentagon asking for?

Hegseth said that Anthropic needs to allow the Pentagon full access to its AI for all “lawful” purposes, including AI warfare and surveillance. Defense officials told NPR that the military would keep using the company’s AI tools regardless of the company’s objections. 

The Defense Production Act has wide-ranging implications but is typically used in manufacturing contexts, The New York Times reports. The atypical move would force Anthropic to make its product available for free. 

The Pentagon previously awarded Anthropic a military contract of up to $200 million in 2025. The company was the first cleared for classified use, beating Google’s Gemini and OpenAI’s ChatGPT. Military officials said Anthropic’s AI was the most advanced and secure model for sensitive applications. 

Anthropic CEO Dario Amodei has previously raised ethical concerns about unregulated government use of AI, especially the dangers of fully autonomous drones armed with deadly weapons. 

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Amodei wrote in January.

On Tuesday, Amodei appeared on The Times’ “Interesting Times” podcast. He raised concerns about AI “drone swarms,” which could attack people with no human input. 

“The constitutional protections in our military structures depend on the idea that there are humans who would disobey illegal orders with fully autonomous weapons,” Amodei said.

The military has pushed back on Anthropic’s concerns, saying it requires tools without built-in limitations. Pentagon officials told Al Jazeera that the military has issued only lawful orders and is legally responsible for the tools. 

Does the government use AI to spy on Americans?

Besides autonomous weapons, Anthropic and other AI companies fear the government would use their products to spy on Americans. The company’s biggest worry regarding surveillance is that a sufficiently advanced AI, given enough data, could effectively end private life in the country. 

Amodei said this could “make a mockery of the Fourth Amendment,” which protects Americans against unreasonable and warrantless searches and seizures.

Currently, there are no federal laws or regulations targeting AI mass surveillance. That worries Anthropic, since it doesn’t want its product to become the infrastructure of a potential American surveillance state. 

Earlier this month, Mrinank Sharma, an Anthropic AI safety researcher, left the company over concerns about how AI is being used. In a statement following his resignation, Sharma said that action is needed now, since the crisis is already underway. 

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote. “Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Palantir, a data analytics company specializing in government agencies and the military, is developing an AI-based program that can track and pinpoint potential deportation targets, according to the Electronic Frontier Foundation. The company called the tool the Enhanced Leads Identification and Targeting for Enforcement, or ELITE. 

Straight Arrow News has previously reported that Immigration and Customs Enforcement has potentially used surveillance tools to track protesters.

How do Anthropic’s guidelines differ from other AI companies?

Anthropic is the last major AI company to refuse to grant the military unrestricted access. 

OpenAI, the largest AI company, quietly deleted references to military and warfare from its list of prohibited uses in early 2024. When asked why, company officials said there were “national security use cases that align” with the company’s mission. 

Google followed OpenAI’s lead about a year later. The reversal was a major shift for the company, which had explicitly pledged not to use AI for weapons or surveillance. That pledge followed employees’ concerns in 2018 about Project Maven, a military effort to accelerate the adoption of AI across military intelligence workflows, and Google eventually left the project, which is still ongoing. 

Unlike Google and OpenAI, Elon Musk’s xAI never had any safety policies regarding military or surveillance use. The company recently reached a deal with the military, accepting the Pentagon’s “all lawful use” standard. xAI was the second AI system approved for classified military networks.
