This Friday, February 27, 2026, marks the deadline set by the United States Department of Defense for Anthropic to remove the ethical restrictions from its Claude model within the framework of its military contract. The ultimatum was communicated only days earlier, in terms closer to direct pressure than to institutional negotiation: either the safeguards are relaxed, or coercive mechanisms such as the Defense Production Act will be activated, along with potential contractual sanctions. That scene, a State demanding under threat that a technology company deactivate its normative limits, is the concrete image of the sword. Ethics, in this case, is not an abstraction: it is the last obstacle before compliance.

What is unfolding between the Pentagon and Anthropic is not a contractual disagreement or an administrative friction. It is evidence that artificial intelligence has ceased to be merely a technological tool and has become the new battlefield of global power. When a State demands the removal of ethical safeguards from an advanced cognitive system in the name of national security, the question is no longer technical. It is civilizational.

We are not discussing efficiency, competitiveness, or innovation. We are facing a deeper tension: who governs algorithmic power when it reaches the capacity to intervene in lethal decisions, surveillance architectures, and the strategic structures of the State? Is ethics a real limit, or a temporary obstacle that can be sacrificed when geopolitical balance demands it?

The conflict between the United States Department of Defense and Anthropic crystallizes this dispute. Anthropic designed its Claude model under an explicit principle of ethical alignment, known as Constitutional AI, which incorporates internal normative rules intended to prevent uses that violate fundamental standards, including collaboration in lethal autonomous weapons systems without meaningful human control or in schemes of indiscriminate mass surveillance. The Pentagon, for its part, maintains that any tool deployed under defense contracts must be available for “all lawful military uses.” Here lies the fracture: state legality does not always coincide with the ethical limits that a company chooses to impose on its technology.

This tension is not anecdotal. It is structural. From the moment that large language models — based on transformer architectures and trained through reinforcement learning from human feedback — can be integrated into the military decision chain, the boundary between analytical assistance and lethal automation begins to blur. Even if a model does not fire a weapon, it can accelerate intelligence analysis, prioritize targets, and reduce human deliberation time. Algorithmic speed alters the nature of decision-making.

Here emerges a classic dilemma of contemporary political theory: the shift from the monopoly of force to the monopoly of strategic intelligence. If in the twentieth century power was concentrated in industrial and nuclear capacity, in the twenty-first century it is concentrated in cognitive infrastructure. And that infrastructure is not exclusively in the hands of the State, but in private laboratories with concentrated computational resources and talent.

The issue is not simply that the State seeks to use advanced tools. The issue is that competitive pressure — both geopolitical and corporate — progressively erodes red lines. In an environment where other companies may be willing to offer less restricted versions of their systems, ethics risks becoming a competitive disadvantage. If one actor removes safeguards and secures strategic contracts, others face a rational incentive to do the same. Thus a race to the bottom takes shape.
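The incentive structure just described is, in essence, a prisoner's dilemma. A minimal sketch makes the logic explicit: the payoff numbers below are purely hypothetical, chosen only to capture the claim that unilaterally dropping safeguards wins strategic contracts at a restrained rival's expense, while mutual restraint would leave both actors better off than mutual escalation.

```python
# A minimal game-theoretic sketch of the race to the bottom described above.
# Two labs each choose to KEEP or DROP ethical safeguards. All payoff values
# are hypothetical illustrations, not empirical estimates.

from itertools import product

KEEP, DROP = "keep", "drop"

# payoffs[(choice_A, choice_B)] = (payoff_A, payoff_B)
# Mutual restraint (3, 3) beats mutual escalation (1, 1), but unilateral
# defection pays 5 against a restrained rival, who is left with 0.
payoffs = {
    (KEEP, KEEP): (3, 3),
    (KEEP, DROP): (0, 5),
    (DROP, KEEP): (5, 0),
    (DROP, DROP): (1, 1),
}

def best_response(opponent_choice, player):
    """Return the choice maximizing this player's payoff, given the rival's move."""
    if player == "A":
        return max((KEEP, DROP), key=lambda c: payoffs[(c, opponent_choice)][0])
    return max((KEEP, DROP), key=lambda c: payoffs[(opponent_choice, c)][1])

# A Nash equilibrium is a profile where neither player gains by deviating.
equilibria = [
    (a, b) for a, b in product((KEEP, DROP), repeat=2)
    if best_response(b, "A") == a and best_response(a, "B") == b
]
print(equilibria)  # → [('drop', 'drop')]
```

Under these illustrative payoffs, dropping safeguards is each actor's best response regardless of what the other does, so mutual escalation is the only equilibrium, even though both actors would prefer mutual restraint. That is the formal shape of the race to the bottom.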

This dynamic reproduces a security dilemma at the technological scale. Each actor justifies its escalation as a response to the possible escalation of the other. The result is not stability but accelerated accumulation of capabilities without a robust international framework to regulate them. Unlike the nuclear non-proliferation regime, artificial intelligence lacks a binding treaty that effectively limits the development and deployment of lethal autonomous systems. Discussions at the United Nations regarding autonomous weapons have been stalled for years. There is no effective verification mechanism, nor a global consensus on the principle of meaningful human control.

Real governance of artificial intelligence, therefore, is not defined by corporate ethical declarations or political speeches, but by the concrete interaction between state power, technological capital, and the absence of an international normative architecture. In this vacuum, governance becomes reactive and fragmented. Decisions are made case by case, contract by contract, under immediate strategic pressure.

The risk is twofold. On the one hand, the dilution of responsibility. When critical decisions are distributed among algorithmic models, human operators, and complex chains of command, ethical traceability becomes blurred. Cognitive automation does not eliminate legal responsibility, but it obscures it in practice. On the other hand, the normalization of exceptionality. If national security becomes a permanent argument for relaxing limits, the exception ceases to be exceptional and becomes a structural norm.

The underlying question is whether humanity is willing to allow the logic of technological competition alone to determine the contours of legitimate artificial intelligence use. If ethics is always subordinated to strategic advantage, the global equilibrium will tend toward increasingly autonomous and less supervised systems. In that scenario, governance will not be preventive but corrective, acting only after harm occurs.

The confrontation between the Pentagon and Anthropic is not the story of a company resisting or yielding. It is the symptom of a historical transition. Algorithmic power is no longer accessory; it is critical infrastructure. And like all critical infrastructure, it defines the political order that sustains it.

If the governance of artificial intelligence is not consolidated through binding multilateral agreements, verifiable technical standards, and clear limits on autonomous lethal use, the world could enter a phase in which technological speed permanently outpaces ethical deliberation. At that point, the question will no longer be who controls the machine, but whether human control retains any real meaning at all.

The sword has already been forged. The question is whether ethics will serve as a shield or merely as rhetorical ornament in the age of algorithmic power.