Autonomous systems are no longer laboratory curiosities. They are operational platforms, logistics nodes, and weapons system adjuncts across air, land, sea, and cyber domains. That evolution has created a broad, multi‑layered attack surface where conventional software exploits meet physical sensor manipulation and machine learning failure modes. The problem is not an abstract set of vulnerabilities. It is a set of chain reactions: a compromised sensor can induce misperception, misperception can trigger unsafe control outputs, and unsafe outputs in a distributed force multiply into mission failure or unintended collateral effects.
Threat taxonomy. I group the cyber threats to autonomous systems into five operational classes that span both classical cyber techniques and newer ML‑driven vectors.
1) Navigation and timing attacks. GNSS jamming and spoofing remain the simplest high‑leverage means to break autonomy that depends on satellite navigation. State and non‑state actors have run persistent spoofing campaigns that have displaced ship and platform positioning, demonstrating that large civilian and military fleets can be deceived or degraded at scale. Attackers need neither privileged software access nor deep AI expertise to cause a navigational failure.
2) Perception and sensor manipulation. Vision, LiDAR, radar, and acoustic sensors are the raw inputs for autonomy. Researchers have shown physical adversarial attacks against vision systems that cause consistent misclassification in the real world, and recent work demonstrates that LiDAR and other 3D sensing streams are also susceptible to stealthy, partial‑information attacks that can degrade tracking and fusion layers. When sensor integrity is lost, sensor fusion cannot be trusted without explicit anomaly detection and run‑time assurance.
3) Communications and command‑and‑control subversion. Unauthenticated or weakly protected control links permit classical hijacking, replay, or man‑in‑the‑middle attacks that convert an asset from friend to foe. The historical automotive example in which researchers remotely commandeered vehicle functions remains a useful demonstration of how a remote‑access vulnerability can cascade into physical danger. The same class of failures, scaled up, is catastrophic in military or critical‑infrastructure use cases. A minimal sketch of an authenticated, replay‑protected command frame appears after this list.
4) ML‑specific attacks: poisoning, backdoors, and model extraction. Learning components introduce new supply chain and lifecycle risks. Poisoning training data or inserting backdoors during distributed or federated training can cause a deployed model to behave incorrectly under attacker‑chosen triggers. Model extraction and theft also create strategic risks if adversaries learn classification boundaries and craft transferable black‑box attacks. The literature documents both the feasibility of these attacks and the limited maturity of defenses.
5) Software supply chain and firmware compromises. Autonomous platforms integrate third‑party libraries, middleware, OS images, and firmware. A single compromised module, or the absence of secure boot and code signing, converts a remote exploit into system takeover. The practical attack chain commonly mixes classical flaws with domain‑specific weaknesses in how modules trust each other at run time.
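The link‑subversion class in item 3 is worth making concrete. Below is a minimal sketch of the discipline an authenticated control link enforces: every frame carries a monotonic counter and an HMAC tag, and the receiver refuses anything unverified or stale. All names here (`seal_command`, `open_command`, `LINK_KEY`) are illustrative assumptions, not a protocol specification; a fielded design would add per‑link keys from a key‑management service, key rotation, and mutually authenticated session establishment.

```python
import hashlib
import hmac
import struct

# Hypothetical 256-bit pre-shared link key. A fielded design would provision
# per-link keys from a key-management service, never a hardcoded constant.
LINK_KEY = b"\x11" * 32

_last_counter = 0  # highest counter accepted on this link so far


def seal_command(counter: int, payload: bytes) -> bytes:
    """Frame a command as: 8-byte big-endian counter || payload || HMAC tag."""
    header = struct.pack(">Q", counter)
    tag = hmac.new(LINK_KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag


def open_command(frame: bytes) -> bytes:
    """Verify authenticity and freshness before acting on a frame."""
    global _last_counter
    header, payload, tag = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(LINK_KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad tag: forged or corrupted frame, drop and log")
    counter = struct.unpack(">Q", header)[0]
    if counter <= _last_counter:
        raise ValueError("stale counter: replayed frame, drop and log")
    _last_counter = counter
    return payload
```

The point is architectural rather than cryptographic: authentication and replay protection must be enforced before any payload reaches the flight or drive stack, so that a captured frame is useless to an attacker a second time.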
Operational consequences and observed patterns. The real‑world implication of these categories is straightforward: an adversary can convert cyber access into kinetic effects without an overt kinetic breach. Historical demonstrations and field reporting show spoofing and jamming used to create navigational errors for maritime and airborne systems, while academic and laboratory results show the feasibility of forcing perception errors via adversarial perturbations. Together these trends make autonomy brittle unless assurance is treated as an ongoing operational requirement rather than a pre‑deployment checkbox.
Where current mitigation efforts work, and where they fall short. Two technical approaches have delivered measurable benefits.
- Sensor diversity and hardened fusion. Systems that do not rely on a single sensor modality reduce the success probability of any single‑vector attack. Papers and programs in the assured‑autonomy community emphasize runtime monitors and constraints that detect sensor inconsistency and fall back to conservative behaviors. These architectures raise the attack cost, but they do not make systems invulnerable.
- Cryptographic and signal‑level protections. Authenticated C2 links, signed firmware, secure boot chains, and the use of encrypted GNSS signals in military contexts raise the bar considerably. But commercial GNSS receivers and legacy platforms often lack these protections, creating predictable exploitation pathways in mixed fleets. Spoofing‑detection techniques that cross‑check camera data, inertial sensors, and network path measurements catch many attacks in practice, yet they require careful integration and testing; a minimal cross‑check of this kind is sketched after this list.
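As a minimal illustration of the cross‑checks both bullets describe, the sketch below compares a GNSS fix against an inertial dead‑reckoning estimate and moves the platform into a conservative mode when they diverge. All names and the 25 m threshold are assumptions for illustration; a production monitor would fuse more modalities, filter transient outliers, and alert an operator before degrading.

```python
import math
from dataclasses import dataclass


@dataclass
class Fix:
    x: float  # position estimate in a local frame, metres
    y: float


# Illustrative threshold only; a real value comes from the platform's INS
# drift model and GNSS error budget.
DIVERGENCE_LIMIT_M = 25.0


def divergence_m(gnss: Fix, inertial: Fix) -> float:
    """Euclidean disagreement between GNSS and inertial dead reckoning."""
    return math.hypot(gnss.x - inertial.x, gnss.y - inertial.y)


def navigation_mode(gnss: Fix, inertial: Fix) -> str:
    """Cross-check the two sources and degrade conservatively on conflict."""
    if divergence_m(gnss, inertial) > DIVERGENCE_LIMIT_M:
        # Sustained divergence suggests spoofing or jamming: stop trusting
        # GNSS, hold a safe behaviour, and navigate on inertial data only.
        return "DEGRADED_INERTIAL_ONLY"
    return "NOMINAL_FUSED"
```

The design choice that matters is the conservative default: when the sources disagree, the monitor does not try to decide which one is lying; it removes the cheap‑to‑spoof source and constrains behavior until integrity is restored.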
Gaps that require urgent focus. Three engineering and policy shortfalls create systemic risk.
1) Lifecycle assurance for learning components. Traditional V&V is not sufficient for models that adapt or are periodically retrained. The community must operationalize continuous assurance: continuous monitoring, red‑teaming, measurable coverage of the input space, and provable run‑time constraints. DARPA and other programs have advanced toolchains for continuous assurance, but industry adoption remains uneven.
2) Standardized adversarial evaluation for perception stacks. Academic demonstrations show attack feasibility, but there is no widely accepted, domain‑specific standard for measuring the resilience of perception pipelines under adversarial or spoofing conditions. Creating benchmarked tests that combine physical perturbations, sensor deception, and network attacks is a pragmatic first step; a skeleton of such a harness is sketched after this list.
3) Supply chain transparency and software provenance. Without mandatory SBOMs, signed deliveries, and robust vulnerability reporting tied into lifecycle management, platforms will continue to ship with latent, high‑impact flaws. This is a governance and procurement problem as much as a technical one.
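A standard of the kind item 2 calls for ultimately reduces to a harness that runs one perception pipeline against a fixed menu of perturbations and reports comparable scores. The skeleton below is a sketch under that assumption; every name in it (`pipeline`, `attacks`, `score`, `resilience_matrix`) is hypothetical.

```python
from statistics import mean
from typing import Callable, Dict, Iterable, Tuple

# Hypothetical harness types: `pipeline` maps a raw frame to detections, each
# entry in `attacks` perturbs a frame (adversarial patch, spoofed LiDAR
# returns, dropped packets), and `score` compares output to ground truth.
Frame = object
Truth = object


def resilience_matrix(
    pipeline: Callable[[Frame], object],
    attacks: Dict[str, Callable[[Frame], Frame]],
    dataset: Iterable[Tuple[Frame, Truth]],
    score: Callable[[object, Truth], float],
) -> Dict[str, float]:
    """Mean task score per attack class: the raw material for a benchmark."""
    frames = list(dataset)
    return {
        name: mean(score(pipeline(attack(frame)), truth) for frame, truth in frames)
        for name, attack in attacks.items()
    }
```

Including an identity "attack" (`lambda f: f`) gives the clean‑data baseline against which degradation is measured; what turns this skeleton into a standard is fixing the attack classes, perturbation budgets, and scoring function so results are comparable across vendors and platforms.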
Practical roadmap for program managers and engineers. The list below is deliberately concrete and prioritized for defense and critical infrastructure programs.
1) Adopt multi‑layered navigation integrity. Require GNSS cross‑checks with inertial navigation, visual odometry, and cellular/tower‑based localization where available. Validate failover behavior in contested spectrum conditions.
2) Build run‑time assurance and monitors into every ML‑enabled module. Use lightweight, verifiable constraints and anomaly detectors that can place a model into a safe degraded mode when confidence drops or sensor readings diverge. Real‑time monitors should be part of the architecture, not an afterthought.
3) Enforce cryptographic hygiene across control and update channels. Mandate secure boot, signed firmware, and mutually authenticated C2 links. Instrument platforms to verify SBOMs and to accept only authenticated updates; a minimal signature‑verification sketch follows this list.
4) Institutionalize red teaming and adversarial testing. Routine purple‑team campaigns should include physical adversarial tests against perception, GNSS spoofing exercises, and full‑stack intrusion scenarios that emulate realistic attackers. Track remediation metrics and require fixes prior to fielding.
5) Mandate supply chain and data provenance for ML pipelines. Require provenance metadata for training datasets, mechanisms to detect poisoning during ingestion, and defenses for federated learning where it is used. Operational ML must be auditable; a shard‑level integrity check is also sketched after this list.
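To illustrate item 3, here is a minimal signature check using the pyca/cryptography package. The update framing, a raw 64‑byte Ed25519 signature prepended to the image, and all names are assumptions for the sketch; on a real platform this logic belongs in the bootloader's verified chain with the vendor key pinned in ROM or a secure element, not in application code.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical update framing: a raw 64-byte Ed25519 signature prepended to
# the firmware image. The vendor public key is assumed to be pinned in ROM
# or a secure element, never fetched over the network.
SIGNATURE_LEN = 64


def verify_update(vendor_pubkey_raw: bytes, blob: bytes) -> bytes:
    """Return the firmware image only if the vendor signature verifies."""
    public_key = Ed25519PublicKey.from_public_bytes(vendor_pubkey_raw)
    signature, image = blob[:SIGNATURE_LEN], blob[SIGNATURE_LEN:]
    try:
        public_key.verify(signature, image)
    except InvalidSignature:
        raise RuntimeError("update rejected: missing or invalid signature")
    return image
```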
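For item 5, the cheapest enforceable control is a content‑addressed manifest checked at every ingestion. The sketch below assumes a hypothetical JSON manifest mapping shard filenames to SHA‑256 digests, produced when the data's provenance was approved; real pipelines would also sign the manifest itself and record full dataset lineage.

```python
import hashlib
import json
from pathlib import Path


def verify_shards(data_dir: str, manifest_path: str) -> list:
    """Return dataset shards whose bytes no longer match their recorded digest.

    The manifest is assumed to be a signed-off JSON map of shard filename to
    SHA-256 digest. A mismatch does not prove poisoning, but it flags any
    shard modified after provenance sign-off, which must be quarantined
    before it reaches training.
    """
    expected = json.loads(Path(manifest_path).read_text())
    tampered = []
    for name, digest in expected.items():
        actual = hashlib.sha256((Path(data_dir) / name).read_bytes()).hexdigest()
        if actual != digest:
            tampered.append(name)
    return tampered
```

A check like this does not detect poisoning that predates sign‑off; it closes the easier path of tampering with approved data in transit or at rest, which is why it complements rather than replaces ingestion‑time poisoning detection.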
Concluding assessment. Autonomous systems promise operational advantage through scale and persistence, but they invert the cost model for defenders. Attackers need comparatively modest investment to create outsized effects if systems lack continuous assurance. The correct posture is defensive depth: layered sensors, cryptographic protections, lifecycle verification for learning components, and an operational discipline that assumes compromise is possible. Programs such as DARPA’s Assured Autonomy and NIST’s work on autonomous systems assurance point toward practicable engineering directions. Adoption at scale, combined with procurement and regulatory requirements that prioritize security and provenance, is the only plausible route to resilient deployed autonomy.