Trump and Hegseth Want AI Without Rules
Anthropic’s stand against weaponized AI got them fired by DOD. It's an important opportunity for Democrats.
Generations from now, historians may say that the most existential question of our time was not posed by ICE raids, or Supreme Court decisions, or the wars in Iran and Ukraine, as important as these issues are to us today. They may instead focus on something completely different — how did we manage, and did we survive, the advent of artificial intelligence that in the next few years will be able to do almost everything better and faster than human beings?
So let me interrupt our usual programming to discuss what might be the biggest story so far in this tumultuous year: the brave stand an American AI company, Anthropic, took to prevent the U.S. Department of Defense (DOD) from using its technology in frightening ways, and Secretary of Defense Pete Hegseth’s punishment of that company for daring to demand guardrails on its own product.
Anthropic asked DOD to meet two conditions, which happen to address two of the most basic risks posed by advanced AI: that its AI tools not be used to enable mass surveillance of Americans, or weapons that can kill autonomously. These conditions were consistent with the U.S. military’s stated values and policies. And they were not new; both were part of an “acceptable use policy” contained in a contract that DOD signed with the company a year ago.
The conflict with Anthropic only came to a head when Pete Hegseth decided to remove these guardrails from the existing contract. When Anthropic refused, he designated it a “supply chain risk,” a kiss-of-death sanction previously applied only to foreign companies that threaten U.S. national security, potentially barring it from doing business not just with the Pentagon but with any private company that does business with the U.S. government.
Does Hegseth actually want to engage in the mass domestic surveillance or autonomous robot killing that the contract with Anthropic prohibited, or is he just offended in principle that a company might insist on having principles? All we know for sure is DOD’s stated justification: that contractors should not be able to demand anything beyond that their products will be used for “lawful purposes.” That sounds reasonable to some. If the Pentagon promises to obey U.S. law and the Constitution while using AI, isn’t that good enough?
The problem is that there is no law limiting the development of autonomous weapons. It would be perfectly legal for Hegseth to build an army of German-accented terminator robots programmed to seek out and kill specific enemies abroad with no intervention from a human operator. There is a DOD policy that requires “appropriate levels of human judgment” over the use of such weapons, which many still assume means that a human officer must make the final decision to engage a human target. But the rule is ambiguous — it could also mean that humans just make broad decisions about how, when, and why a weapon may be fired, while letting the AI select specific targets.
What’s more, the main current utility of AI in military targeting is not control of the weapon itself. It’s that AI can rapidly analyze huge amounts of information — from written intelligence reports, to telephone intercepts, to satellite and facial recognition images — to identify targets and their patterns of movement, and then produce “targeting packages” laying out how, when, and where to kill them, which a human officer just has to approve. The better AI becomes, the more likely those humans will be to rubber-stamp its decisions, especially if they’re under pressure and have no time to review and second-guess the data it is using.
As for Anthropic’s second concern — that its tools could be used for mass surveillance of Americans — there are restrictions on what the government can do, from the Constitution’s Fourth Amendment to the Foreign Intelligence Surveillance Act. But AI could enable scary new forms of spying not covered by those laws. For example, the government needs no warrant to look at what we say and do in public, like when we’re walking down a street, or attending a political protest, or venting on social media. With advanced AI, government snoops could do something previously impossible — compile and analyze every available piece of data on every person, including facial images and transcripts of conversations recorded in public — to identify, catalogue, and harass their political foes.
A reasonable critic of Anthropic might say “OK, that would be bad, but in a democracy, the laws should be made by our elected officials, not imposed on us by a private company.” Fair enough, and Anthropic’s CEO, Dario Amodei, seems to agree — he’s the one AI billionaire still urging lawmakers to safeguard the technology. But the others have launched a Super PAC to pressure Congress — and bribed Trump with ballroom and campaign donations — to prevent passage of any effective laws or rules. So the AI companies remain free to deploy a super intelligence that will upend our lives, constrained by less regulation than a local pizza shop.
And when one American corporation asks for ethical limits on the use of its products, even if that means limiting its own profits, our government’s reaction is not just to avoid business with it, but to try to destroy it. Trump wouldn’t ban TikTok (despite a law requiring that), but he called Anthropic a “radical left woke company” and threatened “civil and criminal consequences” to compel it to stop asking for safeguards. Hegseth hasn’t declared a single Chinese AI company a “supply chain risk” — only an American company, for engaging in what is effectively speech he doesn’t like.
The Trump administration’s true position is not that “elected officials should write the AI rules.” It’s that there should be no rules. Sometimes they’ll justify this stance by saying ethical limits will slow American companies down and let China win the race to “AI dominance.” But then they weaken restrictions on selling China the advanced chips it needs to beat us. The only common denominator is letting technology companies make as much money as possible, except for the one company that dared to raise a principled objection.
In this dangerous situation, there is a political opportunity for Democrats. Support for AI safety rules — even if it slows down AI development — is one of the few causes that unites Americans across party lines. Democrats in Congress should press to codify not just Anthropic’s proposed limits on military use, but laws addressing the full range of concerns Americans have about privacy, children’s safety, copyright protections, the future of work, and our ability to tell what’s real from what’s fake. If Trump and his allies block regulation, Democrats should run on it in the Congressional midterms, and in 2028.
Their mantra should be: “The Republicans want to build soul-crushing, job-destroying, freedom-ending AI. We want America to lead the world to safe AI — and build a wall around China or any country that races ahead irresponsibly.”
Tom Malinowski is a former member of Congress from New Jersey who was an assistant secretary of state in the Obama administration.