Trump’s conduct makes a hard question even harder
The Department of Defense has threatened to effectively destroy (or federalize) Anthropic (the manufacturer of Claude, a sophisticated AI model) if Anthropic does not cave in to DOD’s demands and drop its AI red lines by Friday.
Anthropic’s CEO Dario Amodei has long been cautious about new AI uses. As an example of living by its principles, when Anthropic first contracted to deploy Claude on the US government’s classified systems (to date, it is the only AI model deployed on the classified networks), Anthropic imposed two conditions – that Claude not be used for mass domestic surveillance of Americans and that it not be used to create fully autonomous weapons operating without human control.
DOD Secretary Pete Hegseth is demanding that Anthropic drop both conditions by Friday and allow DOD to use Claude in any “lawful” manner that DOD chooses, presumably including the two currently prohibited activities. If Anthropic does not agree, then DOD will either declare Anthropic a “supply chain risk,” which would effectively bar ALL DOD contractors from doing business with Anthropic, OR invoke the Defense Production Act to compel Anthropic to provide DOD with an unlimited AI model bereft of limitations.
The controversy deserves far more public attention than it has gotten. Herewith a few quick, preliminary thoughts:
1. Anthropic, like any other private business (that isn’t a public service like a train system), should be free to contract with whomever it wishes and equally free, on principle, to decline to do business with anyone it considers inappropriate. But principles have costs – if a profitable client won’t agree to your terms, you have to make a choice.
2. DOD was free not to enter into the contract that it did with Anthropic. Presumably, it chose to do so because Anthropic was the best (maybe only) choice then available. If DOD no longer likes the terms of the contract, it is free to modify, cancel, or withdraw as the contract permits.
3. At the same time, it seems very inapt for a DOD supplier to be able to put conditions on how DOD uses a tool that it provides. We would not, for example, allow a tank manufacturer to forbid DOD from selling the tanks to Turkey (because, say, of an affinity for the Kurds), nor would we allow a satellite provider to forbid DOD from using the satellite to assist Israel (because of the Gaza war). Our military has to have the freedom of action to act in the best interests of the country.
4. Part of what is happening here is that Anthropic is under some public relations pressure because Claude was, reportedly, used in the Maduro assault. But that reputational risk is one every defense supplier runs in order to do business with DOD. If you want to sell to the US government, you have to accept that the government will use your product. And that means, as a general matter, that systems provided can be used in any “lawful” manner.
5. The problem arises here for two reasons: First, the law simply never keeps up with technology. [We still haven’t solved the backdoor/encryption problem after 10+ years.] I have not done the research, but I have little doubt that many domestic surveillance uses that DOD might contemplate can be legally justified, and it is equally likely that no domestic law explicitly prohibits the development of fully autonomous weapons systems [international law is different, but non-binding].
6. Second, the larger problem is that Trump has exhausted all presumptions of regularity and lawfulness. Even IF I am wrong and one could identify legal limits on domestic surveillance and autonomous weapons use in Federal law, Anthropic (and the general public) rightly have doubts that Secretary Hegseth would abide by those limits. Indeed, the entire project of Trump/Hegseth over the last year has been to erode the legal limits that bind DOD.
7. Of even greater import, the threat from Hegseth is wildly disproportionate to the problem presented. In essence, what Hegseth is threatening is “I will either destroy you or nationalize you if you don’t give me unlimited permission to use your product as I wish.” Seizing the means of production is a communist sort of thing, isn’t it? And yet it has almost become the norm for the Trump administration to threaten a “nuclear option” response if it doesn’t get its way.
8. I have little doubt that neither of Hegseth’s proposed courses of action would survive a court challenge – as with so many of Trump’s actions, there simply is no factual predicate to justify the steps threatened. Anthropic would, eventually, win in court.
9. But that sort of court victory would be Pyrrhic. During the pendency of the suit, Anthropic (one of the foundational AI creators in the US) would be under a cloud. The economic consequences of fighting would be severe. I would not want to be Amodei today, trying to figure out the next steps.
10. What’s the right answer? That’s pretty obvious – if Claude is not available under terms that DOD wishes, then DOD should find another provider willing to supply the product as specified. Reports are that Elon Musk’s GrokAI is happy to step in (though one wonders whether it is comparable). Over a reasonable transition period, DOD should find a substitute that can do what it needs.
11. Meanwhile, I suppose we can dream and imagine that Congress will step in to legislate. That’s also, obviously, the “right” answer – but in the current political environment, that seems highly implausible.
12. And, finally, more fundamentally, if we don’t like the idea of DOD doing domestic surveillance or deploying autonomous weapons, we need new DOD leadership who will approach those issues differently. Everything Trump touches dies – including faith in the good intentions of a Hegseth-led DOD. If we really want a better approach, we need a better leader.