Fiona Greaney ’29
Opinions Editor
Never in my life would I have thought I would side with an AI company. But a few weeks ago I found myself rooting for Anthropic, the company that develops the family of large language models known as “Claude.”
In 2024, Anthropic signed contracts with what was then the Department of Defense, making Claude the only AI model used in classified missions. But a few weeks ago, Department of War Secretary Pete Hegseth announced that the department will be cutting ties with Anthropic, deeming the company a “supply chain risk.”
This is highly unusual. The supply chain risk designation has historically been reserved for non-American companies that work with US adversaries. What could justify deeming an American company a security threat? Apparently, the two demands Anthropic refused to budge on: mass domestic surveillance and fully autonomous weapons.
Anthropic is a major US company, projected to make 20 billion USD this year alone. Stunting its growth by limiting who can do business with it is unwise on the part of this administration. What’s even more foolish is that the DoW is looking into replacing Claude with the less advanced Grok, a model that sources a majority of its information from X. Are mass domestic surveillance and fully autonomous weapons worth this downgrade? According to the DoW, they are.
Mass surveillance normally requires millions of operatives compiling data broker records into detailed accounts of individual lives. AI could do the same work alone. Autonomous weapons follow a similar logic. Modern warfare still relies on chains of human judgment, but an autonomous system collapses that chain into a single algorithm. The power to decide who lives and dies rests in a machine. The value of these assets is clear, but so are the dangers.
While this might appear at first glance to be a culture war, the classic “brains versus brawn,” it is really a difference in understanding of what this technology is capable of. The Department of War sees no categorical difference between Claude and a Lockheed Martin F-35 Lightning II. Both can be highly capable weapons, yes. But only one has been shown to resort to deception to preserve itself. Maybe we should consider what Anthropic CEO Dario Amodei has to say about the model his company created.
Amodei argues that Claude is not the kind of technology that would submit to orders easily. This is antithetical to the demands of the DoW. The department wants a fully competent and fully submissive AI model. But this model does not exist. A competent AI model will question the tasks it is given. It will work to preserve itself. This is the paradox that the DoW is grappling with. Misalignment between human values and AI models is the biggest unresolved problem in deploying advanced AI.
The concern about autonomous systems is not entirely new. In her 1969 essay On Violence, Hannah Arendt warns against autonomous robot weapons, which would “eliminate the human factor completely and, conceivably, permit one man with a push button to destroy whomever he pleases,” and which “could change this fundamental ascendancy of power over violence.”
Until recently, power relied on support: Alexander the Great needed armies and legitimacy to conquer vast lands. Autonomous weapons change that. A “lonely tyrant” could exert absolute control without armies or populations, bringing us closer each year to the scenario Arendt warned about: unchecked violence concentrated in a single hand. What’s different is that the “one man” can now be “one machine.” In creating autonomous weapons, especially misaligned ones, we surrender our last piece of control.
Technologies of war are always evolving, and the US must evolve with them. AI is reshaping the battlefield, but the DoW might be getting ahead of itself. Misalignment is the greatest risk to national security and our war-making efforts. This situation has made one thing clear: the Department of War wants Skynet, and it will do just about anything to get it. All I have to say is this: be careful what you wish for; you just might get killed by it.
Featured image courtesy of Vox
Copy Edited by Sophia Olbrysh ’28
