Early last week, seemingly out of the blue, Secretary of War Pete Hegseth issued an ultimatum to the artificial intelligence company Anthropic: either grant the Pentagon unfettered access to its Claude AI model, or lose an estimated $200 million contract it inked with the War Department last year.
Hegseth didn’t stop there; he also warned that if Anthropic failed to agree, the company would be listed as a “supply-chain risk” — a designation usually applied to foreign companies from hostile countries. That designation effectively prevents other government contractors from doing business with said company without risking their own federal contracts.
Hegseth gave Anthropic until 5:01 PM last Friday to decide. Anthropic’s answer was a clear “No,” and President Donald Trump immediately ordered all federal agencies to phase out the use of Anthropic’s AI technology.
Anthropic founder and CEO Dario Amodei formerly worked at Sam Altman’s OpenAI but left in 2020 to launch his own AI company. Importantly, part of what drove Amodei away from OpenAI was his vision for the burgeoning technology, or, more accurately, his ethical concerns about the rules under which development of the new technology should proceed.
It is also worth noting that Anthropic is not the only AI company the Pentagon uses; the department also has contracts with OpenAI, Google, and xAI.
So, what was Anthropic’s problem? Amodei said he was concerned that the Pentagon might use Anthropic’s technology to run mass surveillance against Americans or as a weapon without direct human control. Specifically, Anthropic raised concerns over how the Pentagon may have used its AI tech in the highly successful raid to arrest and remove Nicolás Maduro from his presidential compound in Venezuela.
A Pentagon official said the issue with Anthropic has “nothing to do with mass surveillance and autonomous weapons being used.” The official noted that all the other contracted AI companies “are working collaboratively with the Pentagon in good faith to ensure their models can be used for all lawful purposes.” Furthermore, those companies also operate under a stipulation that their products not be used for mass surveillance or for the development of AI weapons that lack human direction.
Given this reality, it would appear that Anthropic was the first to get sideways with the Pentagon, rather than the other way around.
It doesn’t appear that the Pentagon violated its AI usage agreement during the Venezuelan raid, so what was Anthropic’s real concern?
The answer likely lies with Amodei and his apparent desire to see regulatory rules governing AI development, a process in which he’s assuredly eager to have a significant hand. Within the AI industry, Amodei has a reputation as a fearmonger regarding the potential threats posed by unregulated AI development.
Furthermore, Amodei is no fan of Trump. He promoted Kamala Harris and the Democrats in the 2024 election, calling Trump a “feudal warlord” who “represents a serious and legitimate threat to the rule of law.” He urged voters to put Democrats into Congress who would investigate the “corrupt things Trump has done.”
In this most recent incident with the Trump administration, the crux of the issue appears to be that Anthropic, or rather Amodei, is pushing to carve out a progressive role for the company in dictating how the Pentagon develops and uses AI technology. Of course, it is not within a private company’s purview to dictate how the U.S. military uses its products; national security does not fall under a private company’s authority.
Some have been critical of how Hegseth has handled the situation, suggesting that he has given confusing, conflicting, and overreaching directives, such as designating Anthropic as a “supply-chain risk” while also invoking the Cold War-era Defense Production Act (DPA) to force Anthropic to continue working with the War Department.
For example, Dean Ball, a former Trump administration AI adviser, said, “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models. … It doesn’t make any sense.”
Similarly, Katie Sweeten, a former Justice Department official who had worked at the Pentagon, mused, “I don’t know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk.”
The obvious answer is that Hegseth does not actually view Anthropic’s product as a national security risk; rather, he views the company’s leadership as the security risk.
Concerns about developing rules to regulate AI development are bipartisan. Apparently, Amodei wants to be the arbiter of those rules. Hegseth isn’t willing to let that fly.