Anthropic chief executive Dario Amodei

Anthropic Stands Firm on AI Limits in Pentagon Dispute

Anthropic has said it will not relax safeguards on how its artificial intelligence tools are used, even if that means losing US defense contracts, as tensions escalate with the Pentagon.

The company’s chief executive, Dario Amodei, said on Thursday that Anthropic would rather walk away from US Department of Defense work than agree to conditions that could allow its technology to be used in ways that “undermine democratic values.”

The standoff follows a meeting earlier this week between Anthropic executives and US Defense Secretary Pete Hegseth, during which the Pentagon pushed for contract language permitting “any lawful use” of the company’s AI systems. According to Amodei, the talks ended with warnings that Anthropic could be removed from the Defense Department’s supplier list.

“These threats do not change our position,” Amodei said, adding that the company could not accept terms that might enable mass domestic surveillance or the deployment of fully autonomous weapons.

At the heart of the dispute is concern that Anthropic’s AI tools, including its Claude model, could be used to aggregate large volumes of personal data for widespread surveillance or to operate weapons systems without meaningful human oversight. Amodei said such uses were not part of Anthropic’s existing agreements with the Pentagon and should not be introduced now.

The Defense Department, which has also been referred to as the Department of War under an executive order signed last year by President Donald Trump, has not commented publicly on the negotiations. However, senior defense officials have previously suggested the government could invoke the Defense Production Act, a law that allows Washington to compel companies to prioritise national security needs.

Pentagon officials have also raised the possibility of designating Anthropic a “supply chain risk,” a move that would bar the company from certain government work. Former Defense Department officials have questioned whether there is a strong legal basis for such steps.

In a blog post, Amodei acknowledged the importance of AI in national security but drew a clear distinction between acceptable and unacceptable uses. He said Anthropic supports lawful foreign intelligence and counterintelligence operations but believes mass surveillance of citizens conflicts with democratic principles.

On autonomous weapons, Amodei argued that current AI systems are not reliable enough to operate independently in combat situations, warning that inadequate safeguards could endanger both civilians and military personnel. He said Anthropic had offered to collaborate with the Pentagon on research to improve oversight and reliability, but that proposal had not been taken up.

The dispute highlights growing friction between AI developers and governments worldwide as powerful new technologies are increasingly drawn into military and security planning.
