Ten years is a long time in defense technology and a short time in force transformation. By 2035 the most realistic pathway to large scale “AI-piloted” fleets is incremental, program-driven, and domain-differentiated rather than an overnight conversion of every ship, aircraft, and submarine into an autonomous agent. The United States and allied militaries already have the policy scaffolding, experimentation pipelines, and industrial activity that make meaningful automation at fleet scale plausible. What follows is an evidence-grounded scenario for 2035, the engineering contours that will enable it, and a short set of prescriptions for defensible adoption.

Executive picture in 2035

  • What an AI-piloted fleet looks like: layered formations in which thousands of attritable unmanned platforms operate alongside hundreds of crewed nodes. Air forces field hundreds to low thousands of collaborative combat aircraft (CCAs) and attritable strike drones distributed across bases and sea platforms. Navies operate networks of medium and small USVs and many hundreds of autonomous logistics and sensing vessels. Ground forces use swarms of low-cost UGVs for logistics, C2 augmentation, and area denial. These fleets are coordinated by a distributed command fabric that blends human supervision, delegated authority, and machine-speed decision loops.

  • The decisive enabler: scalable, certifiable autonomy software and integrated mission fabrics that permit safe delegation of tactical decisions without abdicating legal or political responsibility. The Department of Defense has already formalized governance expectations for autonomous weapon systems and tied autonomy development to its Responsible AI framework and AI ethical principles, showing that policy will not be the primary blocker to deployment if programs meet rigorous testing and review requirements.

Why 2035 is plausible: the near-term building blocks

  • Mass attritable procurement. The Replicator experiments and follow-on buys demonstrated that the services can move from hundreds to thousands of lower-cost unmanned systems when acquisition, production, and software tooling are aligned. Replicator-style buys, combined with open architectures, create an industrial baseline that can be scaled further into the 2030s.

  • Manned-unmanned teaming maturity. Programs like Boeing’s collaborative combat aircraft work and the Kratos XQ-58 family experiments have already demonstrated practical manned-unmanned teaming constructs. Those demonstrators show how a small number of crewed nodes can task and supervise multiple autonomous teammates in contested airspace. By 2035 the tactics, techniques, and procedures that evolved from these flight-test campaigns will be codified into operational manuals and training pipelines.

  • Naval operationalization of unmanned surface vessels. The Navy’s creation of dedicated USV squadrons and ongoing Ghost Fleet and medium USV experiments have moved maritime autonomy from ad hoc demos to organization-level experimentation, which is the stepping stone to fleet-scale employment. These efforts show both mission concepts and the logistics models required for distributed seaborne fleets.

A pragmatic architecture for AI-piloted fleets

Designing a dependable AI-piloted fleet in 2035 requires layering responsibility, capability, and resilience into three clear strata:

1) Mission orchestration layer. Human operators retain authority over strategic and operational intent. A mission orchestrator service translates that intent into task graphs, allocates assets, and enforces rules of engagement and safety constraints. The orchestrator is the system of record for “why” a fleet chose a course of action and is auditable by design (first sketch after this list).

2) Collaborative autonomy layer. This is the federated AI decision fabric that performs coalition-level sensing, multi-agent task assignment, and tactical planning. Algorithms here are hybrid: model-based planners for constrained physics and rule-governed behavior combined with learned components for perception and local adaptation. Inter-agent contracts enforce minimum safe behaviors and permit graceful degradation (second sketch below).

3) Vehicle-level control layer. Real-time control, closed-loop guidance, sensor fusion, and hard safety monitors run on local compute with certified fail-safes and rollback behaviors. This layer must be hardened for contested environments: GNSS-denied navigation, perception hardened against adversarial and degraded inputs, and degraded-comms operation (third sketch below).
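
To make the orchestration layer concrete, here is a minimal Python sketch (first of three) of a mission plan that records the rationale and constraints behind every tasking decision. The Task and MissionPlan types, the field names, and the sample constraint strings are illustrative assumptions, not a fielded schema.

```python
# A minimal sketch of an auditable mission plan. All names and the
# constraint vocabulary ("geofence:NW", "weapons:hold") are hypothetical.
from dataclasses import dataclass, field
import json
import time

@dataclass
class Task:
    task_id: str
    asset_id: str
    action: str
    constraints: list  # ROE and safety constraints attached at tasking time

@dataclass
class MissionPlan:
    intent: str                               # commander's stated intent
    tasks: list = field(default_factory=list)
    audit: list = field(default_factory=list)

    def assign(self, task: Task, rationale: str) -> None:
        # Record *why* each assignment was made so the plan stays auditable.
        self.tasks.append(task)
        self.audit.append({
            "t": time.time(),
            "task": task.task_id,
            "asset": task.asset_id,
            "rationale": rationale,
            "constraints": task.constraints,
        })

plan = MissionPlan(intent="screen sector NW; no weapons release without approval")
plan.assign(
    Task("T1", "usv-07", "patrol", ["geofence:NW", "weapons:hold"]),
    rationale="closest asset with required sensor fit",
)
print(json.dumps(plan.audit, indent=2))
```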
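
For the collaborative autonomy layer, a toy greedy auction conveys the flavor of multi-agent task assignment (second sketch). The one-dimensional distance cost and the agent and task names are placeholders; a fielded planner would add deconfliction, uncertainty handling, and contract checks.

```python
# A toy greedy auction for multi-agent task assignment. Agents and tasks
# live on a 1-D axis purely as a stand-in for a real cost metric.
def assign_tasks(agents, tasks, cost):
    """Greedily match each task to the lowest-cost still-free agent."""
    assignment, free = {}, set(agents)
    for task in tasks:
        if not free:
            break  # more tasks than agents: leave the rest unassigned
        best = min(free, key=lambda a: cost(a, task))
        assignment[task] = best
        free.discard(best)
    return assignment

agents = {"ccaA": 0.0, "ccaB": 5.0, "ccaC": 9.0}   # positions (hypothetical)
tasks = {"scan-east": 8.0, "scan-west": 1.0}
result = assign_tasks(agents, tasks, cost=lambda a, t: abs(agents[a] - tasks[t]))
print(result)  # {'scan-east': 'ccaC', 'scan-west': 'ccaA'}
```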
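
For the vehicle-level layer, a minimal runtime safety monitor (third sketch) shows the wrap-and-fallback pattern: the autonomy stack proposes a command, and the monitor substitutes a certified fallback whenever the command leaves the safe envelope. The envelope limits and the fallback behavior are invented for illustration.

```python
# A minimal runtime safety monitor; limits and fallback are hypothetical.
def safety_monitor(cmd, state, max_speed=12.0, geofence=((0, 0), (100, 100))):
    (xmin, ymin), (xmax, ymax) = geofence
    x, y = state["pos"]
    inside = xmin <= x <= xmax and ymin <= y <= ymax
    if cmd["speed"] > max_speed or not inside:
        # Fail-safe: slow loiter on current heading, flagged for audit.
        return {"speed": 2.0, "heading": cmd["heading"], "mode": "FAILSAFE"}
    return {**cmd, "mode": "NOMINAL"}

print(safety_monitor({"speed": 15.0, "heading": 90}, {"pos": (50, 50)}))
# -> {'speed': 2.0, 'heading': 90, 'mode': 'FAILSAFE'}
```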

Technical enablers and constraints

  • Edge compute and energy. Expect specialized inference accelerators in each class of platform by 2035, optimized for real-time perception and decision-making workloads with predictable worst-case latency. Power density and thermal limits will still cap sensing and compute budgets on smaller platforms, enforcing trade-offs between autonomy sophistication and attritability (see the latency-budget sketch after this list).

  • Distributed, resilient communications. AI-piloted fleets will not rely on a single high-capacity backbone. Instead, resilient multi-path fabrics that combine line-of-sight data links, tactical mesh, and space-based relays will be routine. Spectrum management will become a strategic utility. The lessons from recent electronic warfare and GNSS interference demonstrate this is a necessity, not a nice-to-have (link-selection sketch below).

  • Robustness to adversarial effects. Adversarial machine learning and spoofing are not theoretical threats. By 2025 national labs and standards bodies had already prioritized adversarial-ML taxonomies and risk management for operational AI. Any fleet-level autonomy must integrate adversarial testing, model provenance tracking, ongoing red-teaming, and rapid rollback capabilities into release pipelines. The NIST AI RMF and associated adversarial ML work are immediate instruments for that effort (provenance-gate sketch below).
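
On the edge-compute point, even a back-of-envelope worst-case latency budget makes the trade-off tangible; the stage timings and deadline below are invented numbers, not measurements.

```python
# A toy worst-case latency-budget check with invented stage timings.
BUDGET_MS = 50.0  # hypothetical control-loop deadline
stages = {"perception": 22.0, "planning": 18.0, "actuation": 6.0}  # worst-case ms
total = sum(stages.values())
print(f"worst-case {total} ms vs budget {BUDGET_MS} ms:",
      "OK" if total <= BUDGET_MS else "OVER BUDGET")
```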
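
On resilient communications, the sketch below shows health-based link selection with a store-and-forward fallback. The link names, health scores, and threshold are hypothetical; real fabrics would measure loss, latency, and interference continuously.

```python
# A sketch of multi-path link selection; all values are illustrative.
def pick_link(health, min_health=0.5):
    """Prefer the healthiest link; hold traffic if no path qualifies."""
    usable = {link: h for link, h in health.items() if h >= min_health}
    if not usable:
        return "store_and_forward"  # queue traffic until a path recovers
    return max(usable, key=usable.get)

links = {"los_datalink": 0.9, "tactical_mesh": 0.6, "sat_relay": 0.4}
print(pick_link(links))                    # -> los_datalink
print(pick_link({k: 0.1 for k in links}))  # -> store_and_forward
```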
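
On adversarial robustness, a provenance gate is one of the simplest controls to wire into a release pipeline: refuse to deploy any model artifact whose digest does not match a recorded manifest entry. The manifest and artifact bytes below are placeholders for signed release metadata and real weights.

```python
# A sketch of a provenance gate for model releases; data is illustrative.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

artifact = b"model-weights-bytes"               # stand-in for real weights
manifest = {"perception-v7": sha256(artifact)}  # entry written at signing time

def provenance_ok(name: str, data: bytes) -> bool:
    """Deploy only artifacts whose digest matches the signed manifest."""
    return manifest.get(name) == sha256(data)

assert provenance_ok("perception-v7", artifact)
assert not provenance_ok("perception-v7", b"tampered-weights")
```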

Operational concepts and numbers (plausible baseline)

  • Attritable air mass. A regional theater force might operate between several hundred and a few thousand attritable CCAs and loitering munitions in support roles. These numbers are driven by industrial capacity, doctrinal thresholds for mass, and logistics footprints. Replicator-like programs proved fielding at scale is possible when funding, production, and software packaging converge.

  • Maritime fleets. Expect squadrons of medium USVs and large numbers of small USVs for sensing and harassment, complemented by dozens of larger logistics or missile-armed USVs for high-end operations. Organizationally, navies will pair traditional carrier or surface action groups with USV squadrons that provide distributed sensors and inexpensive shooter options. The stand-up of USV squadrons and the Ghost Fleet experiments already trace this trajectory.

  • Ground and logistics. Large-scale UGV logistics fleets will be less glamorous but central: autonomous resupply convoys, forward depot management robots, and robotic engineering teams reduce exposure and increase tempo. These systems will usually be semi-autonomous, with a human in the loop for exception handling.

Systemic risks and failure modes

  • Battlefield cyber and supply-chain poisoning. ML systems are only as trustworthy as their training data and supply chain. Poisoned model weights, tainted libraries, or compromised hardware roots of trust can systematically subvert a fleet. Continuous verification, hardware attestation, and independent evaluation authorities will be essential.

  • Comms-denial fracturing. When the communications fabric is degraded, partially disconnected agents must still maintain safe, legal, and tactically sensible behavior. Without rigorous behavior contracts and formal safety envelopes, disconnected agents can take divergent, potentially dangerous actions (blackout-contract sketch after this list).

  • Human-machine mismatch. The speed at which machine agents operate will routinely outrun human decision cycles. Incorrect delegation or poorly specified intent can lead to outcomes that are technically legal but politically unacceptable. Clear delegation thresholds and human-supervisor tooling that makes machine reasoning transparent are mandatory (delegation sketch below).
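
A behavior contract for comms denial can be made mechanical, as in the sketch below; the blackout budget and the fallback behavior names are hypothetical. Once contact has been lost longer than the contract allows, the agent reverts to a pre-approved conservative behavior rather than improvising.

```python
# A sketch of a disconnected-agent behavior contract (values hypothetical).
CONTRACT = {"blackout_budget_s": 300}  # max tolerated time without contact

def on_tick(seconds_since_contact):
    if seconds_since_contact > CONTRACT["blackout_budget_s"]:
        # Contracted fallback: hold inside the last approved geofence,
        # weapons tight, beacon for re-contact.
        return {"behavior": "LOITER_GEOFENCE", "weapons": "HOLD", "beacon": True}
    return {"behavior": "CONTINUE_TASK", "weapons": "AS_TASKED", "beacon": False}

print(on_tick(120))  # within budget: continue the assigned task
print(on_tick(900))  # budget exceeded: revert to the contracted fallback
```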
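
Delegation thresholds can likewise be enforced in code. In this sketch the action tiers and the machine ceiling are invented; the point is that a human sets the ceiling per mission and every machine-initiated action is checked against it.

```python
# A sketch of a delegation-threshold check; tiers are hypothetical.
DELEGATION_TIER = {"sensor_retask": 1, "maneuver": 2,
                   "jamming": 3, "weapons_release": 4}
MACHINE_MAX_TIER = 2  # set per mission by the human commander

def authorize(action: str) -> str:
    tier = DELEGATION_TIER[action]
    return "MACHINE_OK" if tier <= MACHINE_MAX_TIER else "HUMAN_REQUIRED"

print(authorize("maneuver"))         # -> MACHINE_OK
print(authorize("weapons_release"))  # -> HUMAN_REQUIRED
```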

Policy and governance implications

  • Certification and auditability by design. Autonomous modules must produce compact, auditable records of sensed inputs, the decision path, and the constraints applied. This is a technical requirement and a legal one. It maps directly to DoD expectations that autonomy be demonstrably safe and accountable (audit-log sketch at the end of this list).

  • Doctrine first, tech second. Successful scale requires doctrine, training, and organization to adapt before wholesale procurement. Exercises and operational experimentation that pair crewed and uncrewed formations will be the crucible where tactics are discovered and rules of engagement are stress-tested. The pattern in the maritime and air demonstrations is instructive: organizational units were stood up first, hardware second.

  • International norms and escalation control. Widespread employment of AI-piloted fleets will strain existing arms control and maritime/air norms. Confidence-building measures, shared incident reporting, and multinational testbeds for interoperability could reduce the risk of inadvertent escalation.
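
Returning to the auditability point above: a hash-chained decision log is one lightweight way to make records tamper-evident, since editing any past record breaks the chain. The field names and sample entries are illustrative.

```python
# A sketch of a tamper-evident, hash-chained decision log.
import hashlib
import json

def append_record(log, inputs, decision, constraints):
    prev = log[-1]["hash"] if log else "genesis"
    body = {"inputs": inputs, "decision": decision,
            "constraints": constraints, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

log = []
append_record(log, {"track": "radar-42"}, "shadow-only", ["weapons:hold"])
append_record(log, {"track": "radar-42", "iff": "unknown"},
              "request-human-review", ["weapons:hold"])
print(log[1]["prev"] == log[0]["hash"])  # True: records are chained
```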

A short operational checklist for program leads (near-term priorities)

1) Invest now in adversarial ML testbeds, model provenance, and continuous evaluation pipelines. Make red-team results a funding gate.

2) Design mission fabrics with human intent as the primary input, not human override as an afterthought. Build an audit chain that runs from vehicle logs all the way up to theater commanders.

3) Prioritize spectrum resilience and alternate navigation modes. The Ukraine conflict and repeated regional jamming incidents make GNSS denial a baseline threat; planning should assume degraded satnav as a normal condition.

4) Organize units to own unmanned formations as squadrons or detachments early. Organizational ownership accelerates doctrine and sustainment learning.

5) Fund attritable production lines and modular mission kits to allow rapid scaling and recovery from losses. Replicator-style procurement approaches show the budgetary model for this is viable.

Final assessment

By 2035, AI-piloted fleets that mix thousands of attritable platforms with crewed command nodes are technically plausible and operationally attractive for a wide set of missions. The pathway is not purely technical; it is organizational, doctrinal, and regulatory. The two central risks are brittle AI components under adversarial conditions and operational practices that decouple delegation from political accountability. The right sequencing is clear: harden the decision fabrics, codify supervision and audit, scale production of attritable platforms, and run exercises that stress comms, navigation, and ethics. If those steps are followed, 2035 will be a world in which AI-piloted fleets serve as force multipliers rather than force hazards.