NATO did not publish a standalone “AI doctrine” on 6 November 2025. What happened instead was predictable and important: the Alliance used the NATO-Industry Forum in Bucharest to press the case for faster, industry‑backed capability delivery and to reiterate existing policy commitments on the responsible use of AI. Those statements matter, but they are not the same thing as a codified doctrine that translates strategy into operational practice across the NATO enterprise.
The most recent formal, Alliance-level policy document that specifically governs AI remains NATO’s Revised Artificial Intelligence Strategy published in July 2024. That strategy updated the Alliance’s 2021 framing and reaffirmed the six Principles of Responsible Use - lawfulness, responsibility and accountability, explainability and traceability, reliability, governability and bias mitigation - while setting aims to accelerate adoption and protect AI-enabled systems. The 2024 strategy is guidance and ambition, not an operational doctrine.
Since mid-2024, NATO components have been filling in the implementation layers. Allied Command Transformation (ACT) and the Joint Warfare Centre have run experiments and published public-facing write-ups showing how AI is being built into training, data strategies and exercise design. NATO Headquarters has also published enabling pieces, such as a Data Quality Framework, that are necessary prerequisites for trustworthy AI - but again, these are enablers rather than doctrine that tells commanders when, where and under what legal and operational constraints to employ AI-enabled effects.
Why the distinction between strategy, standards and doctrine matters
Strategy sets ends and priorities. Standards and frameworks set the technical and governance requirements necessary for interoperability and assurance. Doctrine is the operational middle layer - it says how forces should organize, fight, allocate authority and supervise systems in theatre. An AI strategy and a Data Quality Framework are necessary. They do not, by themselves, solve key doctrinal questions: who can authorize an AI-directed kinetic effect in a multinational formation, how are human control assumptions codified across 32 Allies, how are allied liability and export controls reconciled, and how will NATO integrate AI assurance into the NATO Command Structure’s decision cycles? Those are doctrinal questions and require unambiguous, publishable answers.
Lessons from national doctrine that NATO should consider
Recent national-level doctrine and directive efforts show what NATO will need to reconcile at the Alliance level. For example, the UK’s JSP 936 sets out an ethics and assurance architecture for defence AI that embeds governance roles and evidence pathways but has already drawn critiques about where it places oversight and limits. National doctrines tend to be explicit about allocation of responsibility inside a single chain of command. NATO must translate that clarity into multinational rules for delegation, cross-certification and shared assurance. Otherwise interoperability becomes either brittle or meaningless in a contested environment.
Operational gaps that a NATO AI doctrine should close
- Command authorization and human-machine thresholds: NATO needs clear doctrine for human oversight and for the circumstances under which authority can be delegated to automated tools in time-critical operations. This must be interoperable across national caveats.
- Interoperability and certification: doctrine must specify minimum assurance baselines, certification paths and reciprocity mechanisms so that Allies can accept each other's AI-enabled tools in coalition operations. That includes standardized test suites, shared datasets or federated validation methods.
- Data governance and sovereignty: NATO's Data Quality Framework is a start, but doctrine must operationalize data access, provenance and sharing rules for classified and unclassified domains to prevent brittle pipelines and adversary exploitation.
- Cognitive and information operations: doctrine must integrate AI use and countermeasures in the information environment - from detection, attribution and mitigation of synthetic content to guidance on tactical use of influence capabilities in line with international law.
- Assurance, audit trails and incident response: doctrine should require traceability, logging and post-event forensics for AI-enabled decisions, and create rapid multinational incident response arrangements for model compromise or adversarial manipulation.
Each of these areas has technical and political dimensions. NATO already recognizes some of them in strategy and supporting initiatives, but operationalization is the job of doctrine.
What to expect next and recommended steps for NATO policymakers
First, expect incrementalism. NATO will continue to publish implementation artifacts - exercises, data frameworks, use-case demonstrations and interoperability experiments - rather than one monolithic “AI doctrine” overnight. The Alliance has signalled this path repeatedly at transformation fora and industry events.
Second, NATO needs a prioritized, phased doctrine process that mirrors military acquisition and training cycles: (1) define minimal operational rules for use cases where AI is already mature and deployed; (2) publish assurance and interoperability standards for those use cases; (3) roll these into accredited training and exercise requirements; (4) iterate based on lessons learned in coalition exercises and operations. Running these workstreams concurrently is expensive, but necessary to avoid a capability-policy gap that adversaries will exploit.
Third, do not confuse speed with looseness. The Alliance must avoid a binary choice between speed and restraint. Speed without interoperable assurance will fragment coalition operations and increase risk of escalation due to misattribution or algorithmic error. Doctrine can be the mechanism that lets NATO be both faster and safer by standardizing shared guardrails. National efforts such as the UK JSP 936 provide test cases to harmonize around - not templates to copy verbatim.
Bottom line
Bucharest’s NATO-Industry Forum on 5-6 November 2025 reinforced the Alliance’s push to mobilize industry and accelerate capability delivery. That momentum is essential. But a formal, operational NATO AI doctrine that binds strategy to multinational command practice is not yet in the public record. For the Alliance to marry innovation with collective responsibility, the next step must be an explicit doctrinal effort that converts policy principles and technical standards into operational rules for coalition commanders. Absent that work, NATO will have a strategy for AI and a growing set of technical building blocks, but not the operational grammar needed to use those tools with allied cohesion and legal clarity.