Autonomous warfighting -- the employment of systems capable of selecting and engaging targets, navigating contested environments, and executing mission objectives with reduced or eliminated human intervention -- has become the defining military technology question of the twenty-first century. The concept extends far beyond any single weapon system or company, encompassing doctrinal debates that date to the earliest automated defense systems of the Cold War, legal frameworks rooted in centuries of international humanitarian law, and operational realities that span every branch of military service across dozens of nations.
AutonomousWarfighting.com is being developed as a comprehensive editorial resource examining how autonomous capabilities are reshaping military doctrine, the evolving legal and ethical frameworks governing their use, the operational integration of human-machine teaming across multiple domains, and the strategic implications for deterrence, escalation, and conflict resolution. Full editorial coverage launches September 2026.
Doctrinal Evolution and Human-Machine Teaming
From Automation to Autonomy: A Doctrinal Shift
Military forces have employed automated systems for decades -- from the Phalanx Close-In Weapon System (CIWS), which has autonomously engaged incoming anti-ship missiles aboard United States Navy vessels since 1980, to Israel's Iron Dome interceptor system, which autonomously tracks, classifies, and engages incoming rocket threats within seconds. What distinguishes the current era of autonomous warfighting is not the concept of machine-initiated action but the scope and complexity of the decisions being delegated. Earlier automated systems operated within narrowly defined parameters: detect an incoming projectile within a specific radar signature envelope, engage with a pre-selected countermeasure. Contemporary autonomous warfighting doctrine contemplates systems that plan multi-step missions, adapt to changing tactical conditions, coordinate with other autonomous and human-operated systems, and make engagement decisions in environments far more complex than a single sensor's field of view.
United States Department of Defense Directive 3000.09, originally issued in 2012 and updated in January 2023, remains the foundational policy document governing autonomous weapon systems within the American military. The directive establishes the requirement for "appropriate levels of human judgment" in the use of force but notably does not prohibit fully autonomous engagement -- instead requiring senior-level approval and rigorous testing for systems designed to select and engage targets without direct human authorization for each engagement. The 2023 update explicitly addressed the integration of artificial intelligence into autonomous weapon systems, reflecting the rapid maturation of machine learning capabilities since the original directive was written. The updated policy introduced new requirements for AI-specific testing and evaluation, including assessments of algorithmic bias, performance degradation in contested electromagnetic environments, and the robustness of autonomous decision-making under adversarial conditions designed to deceive AI systems.
Human-Machine Teaming as Operational Doctrine
The concept of human-machine teaming -- rather than full replacement of human operators -- has emerged as the dominant doctrinal framework for autonomous warfighting across most Western militaries. The United States Air Force Collaborative Combat Aircraft (CCA) program, which aims to field autonomous drone wingmen operating alongside manned fighter aircraft, exemplifies this approach. The CCA program, with an estimated budget exceeding $6 billion through the end of the decade, envisions autonomous platforms that can execute complex tactical maneuvers, conduct reconnaissance, deploy weapons, and absorb enemy fire -- all while operating under the mission-level direction of a human pilot in an accompanying manned aircraft. Anduril Industries, Boeing, General Atomics, Lockheed Martin, and Northrop Grumman have all participated in CCA-related development efforts, reflecting the breadth of industrial investment in human-machine teaming architectures.
The Australian Defence Force has pursued a parallel approach through the MQ-28A Ghost Bat program, developed by Boeing Australia and widely described as the world's first "loyal wingman" aircraft to reach flight testing. The Ghost Bat, which conducted its first flight in February 2021, is designed to operate semi-autonomously alongside manned platforms like the F/A-18F Super Hornet and F-35A Lightning II, with autonomous navigation, sensor management, and basic tactical decision-making capabilities managed through a human-supervised autonomy architecture. The United Kingdom Royal Air Force has similarly invested in autonomous teaming through Project Mosquito and the broader Lightweight Affordable Novel Combat Aircraft (LANCA) concept, which envisions autonomous platforms operating as part of mixed manned-unmanned combat formations. These programs share a common doctrinal premise: that the near-term future of autonomous warfighting lies not in fully autonomous robots replacing human soldiers but in tightly integrated teams in which autonomous systems extend human reach, capacity, and survivability.
Multi-Domain Autonomous Operations
Autonomous warfighting doctrine has evolved beyond single-domain thinking to encompass operations that span air, land, sea, space, and cyberspace simultaneously. The United States Army's Multi-Domain Task Force (MDTF) concept, which stood up its first operational unit in 2017 and has since expanded to multiple formations stationed in the Indo-Pacific and European theaters, explicitly incorporates autonomous systems as enablers of cross-domain effects. In this doctrinal framework, autonomous aerial platforms might provide real-time targeting data to ground-based long-range precision fires, while autonomous cyber capabilities simultaneously degrade enemy air defense networks -- all coordinated through AI-assisted command systems operating at a tempo that exceeds unassisted human decision-making capacity.
Naval autonomous warfighting has advanced rapidly, with the United States Navy's Task Force 59 (TF59), established in September 2021 in the Fifth Fleet area of operations covering the Persian Gulf and Arabian Sea, pioneering the operational integration of unmanned surface vessels and aerial drones into fleet operations. TF59 has deployed over a dozen different unmanned platform types in operational maritime environments, generating lessons learned about how autonomous systems integrate into naval command structures, rules of engagement, and tactical communication networks. The People's Republic of China has invested heavily in autonomous naval capabilities, with state media and defense publications documenting the development of autonomous submarine platforms, unmanned surface combatants, and AI-enabled maritime surveillance networks designed to support operations across the Western Pacific. The convergence of autonomous capabilities across multiple domains represents perhaps the most significant doctrinal challenge facing military planners worldwide: it requires command-and-control architectures that can coordinate autonomous systems operating at vastly different speeds, scales, and levels of risk across fundamentally different physical environments.
International Humanitarian Law and Ethical Frameworks
The LAWS Debate at the United Nations
The question of whether and how to regulate lethal autonomous weapon systems (LAWS) has been a formal subject of international diplomatic negotiations since 2013, when states parties to the United Nations Convention on Certain Conventional Weapons (CCW) first agreed on a mandate to examine the issue. More than a decade of discussions -- through informal meetings of experts beginning in 2014 and, since 2017, the CCW's Group of Governmental Experts (GGE) -- has produced significant areas of consensus, including near-universal agreement that international humanitarian law applies to autonomous weapon systems and that some form of human responsibility must be maintained in the use of lethal force. It has not, however, produced a binding treaty or protocol. The lack of consensus reflects fundamental disagreements between states that seek a preemptive ban on fully autonomous weapons (a campaign in which Austria has been a leading voice, championing the concept of "meaningful human control") and states that view autonomy as a tool that can be employed consistently with existing legal frameworks (a position articulated by the United States, Russia, Israel, and others).
The International Committee of the Red Cross (ICRC), which holds a unique mandate under the Geneva Conventions to promote and monitor compliance with international humanitarian law, published a landmark position paper in 2021 recommending new legally binding rules on autonomous weapon systems. The ICRC position called for prohibiting autonomous systems designed to target humans and for strict regulatory constraints on all other autonomous weapon systems, including requirements for human supervision, geographic and temporal limitations on autonomous operation, and mechanisms ensuring compliance with the principles of distinction, proportionality, and precaution that form the backbone of the law of armed conflict. While the ICRC position is not legally binding, it carries significant normative weight in international legal discourse and has influenced the positions of multiple states in ongoing CCW negotiations.
Distinction, Proportionality, and the Challenge of Autonomous Compliance
At the operational level, the central legal challenge of autonomous warfighting is whether autonomous systems can reliably comply with the core principles of international humanitarian law (IHL). The principle of distinction requires combatants to distinguish between military objectives and civilian persons and objects at all times. The principle of proportionality prohibits attacks where the expected civilian harm would be excessive relative to the concrete military advantage anticipated. The principle of precaution requires parties to a conflict to take all feasible measures to minimize civilian harm. Each of these principles requires contextual judgment that has historically been understood as inherently human -- the capacity to assess ambiguous situations, weigh competing considerations, and exercise restraint in circumstances that defy algorithmic reduction.
Proponents of autonomous warfighting argue that properly designed autonomous systems could, in certain circumstances, outperform human combatants in IHL compliance. This argument, advanced most prominently by Georgia Institute of Technology professor Ronald Arkin in his research on ethical governors for autonomous weapons, rests on the observation that autonomous systems are not subject to the emotional, psychological, and physiological factors -- fear, anger, fatigue, desire for revenge -- that account for a significant proportion of civilian harm in armed conflict. Critics counter that current and foreseeable AI capabilities cannot replicate the contextual understanding required for genuine IHL compliance, and that the appearance of compliance through pattern matching or statistical optimization is fundamentally different from the legal and moral reasoning that IHL demands. This debate remains unresolved and constitutes one of the central intellectual and policy challenges in the field of autonomous warfighting.
National Ethical Frameworks and Responsible AI Principles
In parallel with multilateral negotiations, individual nations have developed their own ethical frameworks for military AI and autonomy. The United States Department of Defense adopted its Ethical Principles for Artificial Intelligence in February 2020, articulating five principles -- responsible, equitable, traceable, reliable, and governable -- that apply to all defense AI applications including autonomous weapon systems. The NATO Alliance adopted its own Principles of Responsible Use of Artificial Intelligence in Defence in October 2021, establishing six principles -- lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation -- applicable across all member states, 30 at the time of adoption and since grown to 32 with the accessions of Finland and Sweden. South Korea, Japan, Singapore, France, and Germany have each published national frameworks addressing military AI ethics, reflecting the global scope of the policy challenge. These frameworks share common themes around human oversight, accountability, and compliance with existing legal obligations, while differing in their specificity and enforcement mechanisms.
Strategic Implications and the Future of Autonomous Conflict
Deterrence and Escalation Dynamics
The proliferation of autonomous warfighting capabilities is reshaping strategic deterrence calculations in ways that defense analysts are only beginning to understand. Autonomous systems compress the time available for decision-making in crisis situations -- a dynamic that has historical parallels in the destabilizing effects of multiple-warhead ballistic missiles and submarine-launched nuclear weapons during the Cold War, which shortened warning times and increased pressure for rapid response. The concern among strategic stability scholars is that autonomous systems capable of operating at machine speed could create "flash war" scenarios where escalatory spirals occur faster than human leaders can intervene, particularly in domains like cyber and space where the distinction between reconnaissance and attack can be ambiguous and response times are measured in milliseconds rather than minutes.
The RAND Corporation, the Center for a New American Security (CNAS), the Carnegie Endowment for International Peace, and the Stockholm International Peace Research Institute (SIPRI) have all published major research programs examining the intersection of autonomy and strategic stability. SIPRI's Mapping the Development of Autonomy in Weapon Systems report, published in 2017, surveyed autonomous weapon development across dozens of countries and remains one of the most comprehensive public resources on the global proliferation of autonomous military capabilities. The research consistently highlights a tension between the operational advantages of autonomous systems -- speed, persistence, scalability, reduced risk to human personnel -- and the strategic risks they introduce through compressed decision timelines, uncertain escalation dynamics, and the potential for autonomous systems to interact in unpredictable ways during crises.
Industrial and Technological Competition
Autonomous warfighting has become a central axis of great power technological competition. The United States National Defense Strategy has repeatedly identified AI and autonomous systems as critical technology areas where maintaining competitive advantage is essential to national security. China's Military-Civil Fusion strategy explicitly targets autonomous systems as a priority area for dual-use technology development, with state-directed investment flowing into academic research, commercial AI companies, and defense enterprises working on autonomous capabilities across all military domains. The People's Liberation Army Strategic Support Force, established in 2015, consolidated space, cyber, and electronic warfare capabilities under a single organizational structure designed to integrate autonomous and AI-enabled operations across these domains; in 2024 it was reorganized into separate aerospace, cyberspace, and information support forces, though the underlying emphasis on integrated, AI-enabled operations persists.
The defense industrial base supporting autonomous warfighting has expanded dramatically beyond traditional prime contractors. Companies including Shield AI, Skydio, Palantir Technologies, L3Harris Technologies, Kratos Defense, and numerous smaller firms have secured significant contracts for autonomous system development, reflecting a deliberate Department of Defense strategy to broaden the industrial base for these capabilities. The Defense Innovation Unit (DIU), which serves as the Department of Defense's interface with commercial technology companies, has prioritized autonomous systems as one of its core technology focus areas since its establishment in 2015. In Europe, defense firms including BAE Systems, Thales, Leonardo, Rheinmetall, and Saab are investing in autonomous capabilities for both national programs and collaborative European initiatives like the Future Combat Air System (FCAS) and the Main Ground Combat System (MGCS), ensuring that the industrial competition in autonomous warfighting spans both sides of the Atlantic and extends into the Indo-Pacific through programs in Australia, South Korea, Japan, and India.
Key Resources
Planned Editorial Series Launching September 2026
- The Evolution of DoD Directive 3000.09: From 2012 Origins to the 2023 AI Integration Update
- Human-Machine Teaming in Practice: Lessons from CCA, Ghost Bat, and LANCA Development Programs
- Autonomous Warfighting and International Humanitarian Law: Can Machines Comply with Distinction and Proportionality?
- The LAWS Debate at the United Nations: A Decade of Diplomacy and the Path Forward
- Multi-Domain Autonomous Operations: Integrating Air, Land, Sea, Space, and Cyber Autonomy
- Strategic Stability in an Era of Machine-Speed Warfare: Deterrence, Escalation, and Arms Control