A growing rift between artificial intelligence (AI) firm Anthropic and the US Department of Defence (DoD) has spilt into public view, with the company making it clear it will not dilute its AI safety policies, even if it means losing a major government contract.
Anthropic CEO Dario Amodei said on February 26 that the company would rather walk away from Pentagon business than allow its AI systems to be used in ways that could "undermine, rather than defend, democratic values."
His remarks followed a tense meeting with US Defence Secretary Pete Hegseth, during which the Pentagon reportedly demanded that Anthropic agree to "any lawful use" of its AI tools, including its flagship model, Claude.
The dispute centres on how Claude could potentially be deployed. Anthropic has drawn firm red lines against two applications in particular: mass domestic surveillance and fully autonomous weapons capable of lethal action without human oversight. The company argues that crossing those boundaries would violate its core principles.
According to sources familiar with the talks, Hegseth gave Anthropic a Friday deadline: grant the Pentagon full, unrestricted access to Claude or risk losing its $200 million government contract.
Anthropic declined. "These threats do not change our position," Amodei said. "We cannot in good conscience accede to their request."
While Pentagon officials have publicly stated they have "no interest" in using AI for domestic surveillance or autonomous weapons, Hegseth's own statements reportedly called for broad access to Anthropic's models, and officials accused the company of putting "Silicon Valley ideology above American lives."
Amid the standoff, Anthropic sought to reassure its commercial customers. The company clarified that any designation affecting its government contract would apply only to specific work for the Department of War under federal statute 10 USC 3252.
Commercial API users, private clients, and contractors serving non-defence customers would remain unaffected.
The dispute has also sparked rare solidarity among competitors. OpenAI CEO Sam Altman reportedly told staff he shared Anthropic's "red lines" and wanted to help ease tensions.
Roughly 70 OpenAI employees and 175 Google staffers signed an open letter backing Anthropic's stance, warning that the Pentagon appeared to be pressuring AI companies individually in hopes one might concede.
However, just hours after the 5:01 pm deadline expired, Altman announced that OpenAI had reached its own agreement with the Pentagon to deploy its AI models on classified networks, with safety guardrails in place.
"The DoD agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman wrote on X.
Meanwhile, the Pentagon is reportedly exploring the use of Elon Musk's Grok AI for classified systems, though some current and former officials have described it as less capable than Claude.
The fallout could have significant implications for US intelligence operations. Anthropic's Claude became the first frontier AI model deployed on US government-classified networks in June 2024 and is currently used by agencies such as the CIA and the NSA for intelligence analysis.
Removing it from those systems could disrupt ongoing projects.
The clash underscores a broader debate across the tech and defence sectors: how to balance national security needs with ethical limits on emerging technologies. For now, Anthropic appears determined to hold its ground, even at considerable cost.