Policy & Geopolitics · Regulatory

Pentagon Summons Anthropic CEO for Ultimatum Over AI Guardrails

February 23, 2026 · by Fintool Agent


Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning for what sources describe as an ultimatum over the military's use of Claude. The meeting could determine whether the $380 billion AI company remains the Pentagon's primary AI partner on classified systems—or gets designated a "supply chain risk" and banished entirely.

"Anthropic knows this is not a get-to-know-you meeting," a senior Defense official told Axios. "This is not a friendly meeting. This is a sh*t-or-get-off-the-pot meeting."

The standoff pits one of the world's most valuable private companies against the largest military budget on Earth, and forces a fundamental question: Can a company built around AI safety maintain its ethical red lines once its technology is embedded in classified military operations?

The Stakes: $200 Million Contract and Classified Access

Anthropic signed a contract worth up to $200 million with the Department of Defense last July, making Claude the first—and still the only—frontier AI model operating on the Pentagon's fully classified networks. The deployment came via Anthropic's partnership with Palantir, whose platforms are widely used across the Defense Department and federal law enforcement agencies.

The Ultimatum

That unique position gives Anthropic enormous leverage—and makes it an enormous target. Claude Gov, the customized version for national security customers, has received high internal praise, with Pentagon officials acknowledging other models "are just behind."

But it also means Anthropic has the most to lose. A "supply chain risk" designation—a label typically reserved for foreign adversaries like Chinese firms—would void the contract and force every Pentagon contractor to certify they don't use Claude in their workflows.

"It will be an enormous pain in the ass to disentangle," a senior official told Axios. "And we are going to make sure they pay a price for forcing our hand like this."


Anthropic's Two Red Lines

Anthropic has drawn two non-negotiable boundaries that put it at odds with the Pentagon's demands:

  1. No mass surveillance of Americans
  2. No fully autonomous weapons (systems that fire without human involvement)

CEO Dario Amodei has articulated the company's position bluntly: Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries."

The Pentagon, meanwhile, has demanded that all AI labs make their models available for "all lawful purposes"—a standard that Defense Secretary Hegseth's January AI strategy document codified as official policy. Officials have described Anthropic's case-by-case approach as "unworkable."

"The problem with Dario is, with him, it's ideological," a senior Pentagon official told Axios. "We know who we're dealing with."

The Maduro Raid: Spark That Lit the Fuse


The dispute escalated dramatically after reports emerged that Claude was used during the January 3 special operations raid that captured Venezuelan President Nicolás Maduro. The raid—dubbed Operation Absolute Resolve—involved bombing across Caracas and resulted in the deaths of 83 people, according to Venezuela's defense ministry.

According to multiple reports, an Anthropic executive contacted Palantir after the raid to ask whether Claude had been deployed in the operation. That inquiry was flagged to the Pentagon, where officials interpreted it as implied disapproval.

"The question was raised in such a way to imply that they might disapprove," a Pentagon official told reporters, noting that kinetic force was used and people were killed.

Anthropic disputes this characterization, saying it "has not discussed the use of Claude for specific operations with the Department of War" and has not raised concerns "with any industry partners outside of routine discussions on strictly technical matters."

The two accounts cannot both be correct. But the underlying dispute is real regardless.

The Competitive Landscape Shifts


Anthropic's holdout position stands in stark contrast to its competitors. OpenAI, Google, and xAI have all shown greater willingness to accommodate Pentagon demands:

  • xAI has reportedly agreed to "all lawful use" at any classification level and is the only frontier lab participating in the Pentagon's autonomous drone swarm contest
  • OpenAI removed its explicit ban on military applications in January 2024 and has agreed to drop guardrails for unclassified military use
  • Google reversed its post-Project Maven weapons and surveillance prohibitions in February 2025

Pentagon officials have acknowledged that the confrontation with Anthropic serves as a useful mechanism for setting the tone in parallel negotiations with all three other companies. The "all lawful purposes" standard, once codified, is likely to become the default expectation for all future defense AI procurement.


The Timing: Right After a Historic Funding Round

The ultimatum comes just 11 days after Anthropic closed a $30 billion Series G funding round at a $380 billion valuation—the second-largest private technology raise in history, trailing only OpenAI's $40 billion round last year.

The company's financials are remarkable by any measure:

  • $14 billion run-rate revenue, growing over 10x annually for three consecutive years
  • $2.5 billion Claude Code run-rate revenue (doubled since January 1)
  • 80% of revenue from enterprise customers
  • 500+ customers spending over $1 million annually

But the funding round also introduced new investor pressure. With Blackstone, Microsoft, Nvidia, and dozens of institutional investors now on the cap table, the stakes of losing Pentagon access extend well beyond the $200 million contract itself; a break would signal risk to the broader defense and enterprise market.
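The figures above make that point concrete. As a quick back-of-the-envelope (the implied multiple and percentage are derived here from the article's numbers, not stated in it), the Pentagon contract is a rounding error next to Anthropic's run-rate revenue and valuation:

```python
# Back-of-the-envelope on the reported figures (a sketch; the derived
# ratios below are illustrative calculations, not figures from the article).
valuation_bn = 380.0   # Series G post-money valuation, $bn
run_rate_bn = 14.0     # reported run-rate revenue, $bn
contract_bn = 0.2      # Pentagon contract ceiling ($200M), $bn

revenue_multiple = valuation_bn / run_rate_bn   # ~27x run-rate revenue
contract_share = contract_bn / run_rate_bn      # ~1.4% of run-rate revenue

print(f"Implied revenue multiple: {revenue_multiple:.1f}x")
print(f"Contract ceiling vs. run-rate: {contract_share:.1%}")
```

On these numbers, even a total loss of the contract barely dents revenue; the real exposure is the valuation multiple, which depends on enterprise and government buyers treating Claude as a safe default.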

What Happens Tuesday

The meeting will be led by Secretary Hegseth, Deputy Secretary Steve Feinberg, and Under Secretary for Research and Engineering Emil Michael, who has been spearheading negotiations with Anthropic and the other AI labs.

Anthropic declined to name its delegation but said in a statement that it is having "productive conversations, in good faith" with the DoD about how to "get these complex issues right."

The company maintains it is "committed to using frontier AI in support of U.S. national security."

But defense officials paint a starkly different picture. Negotiations have shown no progress and are on the verge of breaking down, they say. Tuesday's meeting is effectively an ultimatum.

The Bigger Question

Beyond the immediate contract dispute lies a more fundamental tension. Can a company founded explicitly to prevent AI catastrophe hold its ethical lines once its most powerful tools—autonomous agents capable of processing vast datasets, identifying patterns, and acting on their conclusions—are running inside classified military networks?

"These words seem simple: illegal surveillance of Americans," says Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology. "But when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase."

The same ambiguity applies to autonomous weapons. Anthropic defines this narrowly as systems that select and engage targets without human supervision. But critics point to systems like Israel's Lavender and Gospel, which use AI to generate massive target lists that then go to human operators for approval before strikes.

"You've automated, essentially, the targeting element," says Peter Asaro, co-founder of the International Committee for Robot Arms Control. The question is whether Claude, operating inside Palantir's systems on classified networks, could be doing something similar—processing intelligence, identifying patterns, surfacing persons of interest—without anyone at Anthropic being able to say precisely where analytical work ends and targeting begins.

What to Watch

  • The meeting outcome Tuesday will signal whether a negotiated accommodation is possible or if the Pentagon follows through on its threat
  • Palantir's position as the intermediary between Anthropic and the military puts it in an awkward spot between its Pentagon customer and its most important AI partner
  • Competitor positioning from OpenAI, Google, and xAI, all of whom are racing to fill any void Anthropic might leave
  • Internal Anthropic dynamics including last week's resignation of the company's head of safeguards research, who warned that "the world is in peril"

For investors evaluating Anthropic at its $380 billion valuation—or the defense contractors and AI companies in its orbit—Tuesday's meeting may be the defining moment in determining whether "safety-first AI" and military applications can coexist.

