The Anthropic–Pentagon confrontation didn’t just expose a disagreement about safety guardrails. It revealed a deeper structural crisis: no one actually knows who governs AI in war. Governments claim authority. Companies claim responsibility. International institutions claim oversight. But none of these claims are fully backed by law, precedent, or shared norms.
This is why analysts writing at Lawfare and Opinio Juris, along with United Nations officials, describe the current moment as a “governance vacuum.” AI has outpaced the institutions meant to regulate it, especially in military contexts.
Three competing authorities
The sovereignty problem emerges from a three‑way power struggle:
- Governments argue that national defense is a sovereign function, and AI used in war must ultimately be under state control.
- Private AI companies argue that they have a moral and legal obligation to prevent misuse, especially when international humanitarian law is at stake.
- International institutions argue that neither governments nor companies alone should set the rules for autonomous weapons or AI‑enabled targeting.
Each actor has legitimate concerns — and each has blind spots.
Why governments claim authority
Governments argue that democratic legitimacy gives them the right to decide how AI is used in war. This position is reflected in U.S. military doctrine and echoed in analysis by CSIS and CNAS.
From this perspective, Anthropic’s refusal to loosen Claude’s guardrails is seen as a private actor overriding democratic decision‑making. If a model can veto a mission, then the military is dependent on a corporation’s ethics — not the will of elected leaders.
Why companies claim responsibility
AI companies argue that they have a duty to prevent catastrophic misuse. Anthropic’s stance aligns with warnings from the International Committee of the Red Cross and the UN Office for Disarmament Affairs, which caution that autonomous weapons risk violating humanitarian law.
Companies also fear precedent. If they remove guardrails for one government, they may be pressured to do so for others — including authoritarian regimes. In a global market, ethics cannot be selectively applied.
Why international law is struggling to keep up
International law was not built for AI‑driven warfare. Treaties governing weapons systems assume human decision‑makers, predictable behavior, and clear lines of accountability. AI breaks all three.
Efforts to regulate autonomous weapons through the Convention on Certain Conventional Weapons have stalled for years. Meanwhile, states are rapidly integrating AI into targeting, surveillance, and command systems.
The result is a world where:
- Governments claim authority without clear legal limits.
- Companies impose guardrails without democratic oversight.
- International institutions warn of risks but lack enforcement power.
The sovereignty vacuum
The Anthropic–Pentagon conflict is the first visible manifestation of a sovereignty vacuum that has been building for years. AI is now powerful enough to influence war, but no institution has clear, uncontested authority over how it should be used.
This vacuum creates instability. It means that AI governance is shaped by ad hoc confrontations, political pressure, and corporate ethics rather than by coherent rules. As Brookings notes, this is a recipe for unpredictable escalation.
Why this matters for builders
If you build with AI, you are downstream of this sovereignty crisis. Centralized models are not just technical dependencies — they are political actors. Their behavior can change based on elections, geopolitics, or internal ethics debates.
That’s why Playnex emphasizes context sovereignty: the idea that your workflows should be portable, local‑first, and model‑agnostic. When your context is yours, you are insulated from the sovereignty battles happening above you.
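What a model-agnostic, local-first workflow looks like is easier to show than to describe. Below is a minimal sketch under two assumptions: conversation context lives in a plain JSON file you own and can move between tools, and the model backend sits behind a narrow interface so it can be swapped. All names here (`ModelProvider`, `LocalContext`, `EchoProvider`) are illustrative, not part of any Playnex or vendor API.

```python
import json
from abc import ABC, abstractmethod
from pathlib import Path


class ModelProvider(ABC):
    """Narrow interface for any model backend; swapping providers never touches your context."""

    @abstractmethod
    def complete(self, prompt: str, context: list[dict]) -> str: ...


class EchoProvider(ModelProvider):
    """Stand-in backend for local testing; a real provider would call a model API here."""

    def complete(self, prompt: str, context: list[dict]) -> str:
        return f"[echo] {prompt} (context turns: {len(context)})"


class LocalContext:
    """Conversation history kept in a plain JSON file on your own disk."""

    def __init__(self, path: str = "context.json"):
        self.path = Path(path)
        self.turns = json.loads(self.path.read_text()) if self.path.exists() else []

    def append(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})
        # Persist after every turn so the context survives any single tool or vendor.
        self.path.write_text(json.dumps(self.turns, indent=2))


def ask(provider: ModelProvider, ctx: LocalContext, prompt: str) -> str:
    """Run one turn: record the prompt, get a reply, record the reply."""
    ctx.append("user", prompt)
    reply = provider.complete(prompt, ctx.turns)
    ctx.append("assistant", reply)
    return reply


if __name__ == "__main__":
    ctx = LocalContext()
    print(ask(EchoProvider(), ctx, "Summarize today's notes"))
```

The design choice that matters is the narrow `ModelProvider` boundary: if a provider changes its guardrails, its terms, or its politics, only the adapter changes. Your context file, and everything built on it, stays yours.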