The U.S. Department of Defense and artificial intelligence company Anthropic are locked in a dispute over how the military should be allowed to deploy advanced AI systems, according to multiple people familiar with the discussions.
At the heart of the disagreement are safeguards embedded in Anthropic’s technology that restrict its use in autonomous weapons targeting and domestic surveillance. Pentagon officials have pushed to loosen or bypass those limits, arguing that commercial AI tools used by the military should be governed solely by U.S. law rather than corporate usage policies, the sources said.
The talks, tied to a contract valued at up to $200 million, have stalled after weeks of negotiations. People briefed on the matter said the disagreement reflects broader tensions between Silicon Valley and the U.S. government over the role of private companies in shaping how artificial intelligence is deployed on the battlefield.
Test case for Silicon Valley’s influence
The discussions are being closely watched as an early test of whether AI developers can influence how military and intelligence agencies use increasingly powerful technologies. After years of strained relations, major technology firms have moved closer to Washington, securing defense contracts while seeking to preserve ethical boundaries around AI deployment.
Pentagon officials have pointed to a January 9 internal memo outlining the department’s AI strategy, which emphasizes operational flexibility and rapid adoption of commercial technologies. Under that framework, defense officials contend they should retain authority to determine AI use cases, provided they comply with U.S. legal standards.
A spokesperson for the department — recently renamed the Department of War by the Trump administration — did not respond to requests for comment.
Ethical boundaries and national security
Anthropic said in a statement that its AI systems are already widely used by the U.S. government for national security purposes and that discussions with the department remain ongoing. The company added that it aims to continue supporting defense missions while maintaining responsible use standards.
Anthropic was among several AI firms awarded Pentagon contracts last year, alongside Alphabet’s Google, OpenAI and Elon Musk’s xAI. The company has positioned itself as both a national security partner and an advocate for strict guardrails on AI deployment — a stance that has occasionally put it at odds with the current administration.
In a recent blog post, Anthropic Chief Executive Dario Amodei wrote that artificial intelligence should strengthen national defense “in all ways except those which would make us more like our autocratic adversaries.” His comments come amid growing concern within parts of the tech industry over government use of AI in surveillance and violent enforcement actions.
The standoff underscores unresolved questions about who ultimately controls the ethical boundaries of military AI as the technology becomes more deeply embedded in defense operations.