The Pentagon has slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic, limiting its use of a technology that a source said was being used for military operations in Iran. The "supply-chain risk" label, confirmed in a statement by Anthropic, takes effect immediately and bars government contractors from using Anthropic's technology in their work for the U.S. military. Companies can, however, still use Anthropic's Claude in projects unrelated to the Pentagon.
CEO Dario Amodei wrote in the statement that the designation has "a narrow scope" and that the restrictions only apply to the usage of Anthropic AI in Pentagon contracts.
"It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts."
The risk designation follows a months-long dispute over the company's insistence on safeguards that the Defense Department, which the Trump administration calls the Department of War, said went too far.
In his statement, Amodei reiterated that the company would challenge the designation in court.
In recent days, Anthropic and the Pentagon have discussed possible plans for the Pentagon to stop using Claude, Amodei said in the Thursday statement. The two sides have talked about how Anthropic might still work with the military without dismantling its safeguards, he added.