The delegation of critical decisions to machines introduces new challenges, chief among them trust, and this post charts a path for cultivating that trust in autonomous systems. We propose a multi-pronged approach encompassing transparent algorithms, rigorous validation and testing, continuous human oversight, and adaptive feedback mechanisms. The discussion underscores the need for a balanced synergy between human judgment and machine efficiency, suggesting that the future lies not in choosing between humans and machines, but in harmonizing their collaborative potential.

The modern battlefield is no longer defined solely by human power but increasingly by a sophisticated symphony of algorithms and machines, as great powers such as the United States and China seek advantages by leveraging the mechanization, informatization, and intelligentization of their respective armed forces. In the shadow of historical confrontations, the current strategic rivalry unfolds between contesting superpowers diligently advancing their military technologies through a wide range of next-generation weapons, some of which may be autonomous systems. Humans have been “in the loop” since the dawn of war, although they have recently become far more efficient in the act. From the first quick-loading rifle to today’s multi-billion-dollar stealth bomber, technology has played an ever larger role in creating military advantage. Throughout history, though, war has remained people killing people to reach strategic ends. We sit at an interesting intersection in history where this is poised to change. Since the introduction of guidance, navigation, and control (GNC) algorithms, from pigeon-guided bombs through the remotely piloted aircraft (RPA) deployed to the Middle East and elsewhere, humans have steadily been shunted from “in the loop” to “on the loop” of the kill chain. One could infer that the continuation of this trend will soon leave humans “near the loop” or “out of the loop,” concepts causing concern among technologists and policy experts alike.

With the advent of hypersonic missiles and unmanned aircraft systems (UAS), warfighters’ duties are becoming more technologically demanding and their decision timelines more compressed than ever before. One can imagine the kinds of trauma and decision fatigue that today’s console and RPA operators encounter. Operators get tired; UAS don’t. Human-in-the-loop decision-making can also be slow and costly, limiting its scalability, particularly in high-demand, real-time applications. UAS and other robotic systems are well suited to the dull, dangerous, and dirty work, and if computational capability can handle an increasing share of the decisions, shouldering most, if not all, of the burden, this will free up resources for other important, and potentially less traumatizing, missions.

Knowing the potential benefits, we stand at a historic juncture where the traditional human-centric warfare paradigm is being incrementally supplanted by autonomous mechanisms. As we approach the frontier of AI-driven warfare, we face a dilemma deeply embedded in our cultural zeitgeist: the “Terminator Conundrum.” This predicament highlights how the unyielding quest for faster decision-making might overshadow human judgment, giving way to the rule of algorithms in shaping our future. The Terminator Conundrum holds that in a technological arms race, two opponents are incentivized to speed up decision-making past the point any human can keep up with, thereby displacing rational humans in favor of rule-following machines.

This conundrum isn’t merely theoretical; it embodies real-world concerns about decision fatigue among operators and the limitations of human response time, both of which could potentially be mitigated by UAS. This is an area where additional automation will be welcomed, as computers can respond orders of magnitude faster than humans, with no decision fatigue; for the warfighter, the incentive is even greater. The inexorable advance of machine decision-making in warfare raises profound questions about the future role of humans in combat. The integration of AI in operations promises unprecedented speed and efficiency, potentially outpacing human cognitive capabilities and reshaping the OODA loop (Observe, Orient, Decide, Act) in combat operations. However, this transition to machine autonomy introduces new ethical and strategic quandaries, particularly concerning trust and control in life-and-death decisions.

RPAs today are controlled either within line of sight (LOS) or beyond visual line of sight (BVLOS) via relay, cellular, or satellite links, by a pilot at a ground control station (GCS). Against a near-peer competitor, these communications are increasingly vulnerable to electronic attack, with both sides incentivized to develop technology that forces the other to “fight in the dark.” Such spectrum-contested environments further incentivize increased UAS autonomy, requiring UAS that can fulfill their mission regardless of the status of their support systems.

Military Policy in Response to New Paradigms

In response to these new assumptions, the U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by a recent update to U.S. policy on autonomous weapons systems. The update, released Jan. 25, 2023, is the first in a decade to focus on autonomous weapons enabled by artificial intelligence. It follows a related implementation plan released by NATO on Oct. 13, 2022, aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.” The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology and by insisting that the weapons be used in accordance with existing international laws of war. However, the directive fails to remove ambiguity from several key terms.

China takes a similarly nuanced approach to lethal autonomous weapons systems (LAWS). Its position papers and military terminology reveal a dual narrative: advocating for the applicability of legal norms while concurrently narrowing the definition of LAWS to circumvent constraints on its military’s use of AI. This strategic ambiguity reflects a high-stakes environment in which the command-and-control links of drones such as RPAs are increasingly vulnerable to electronic warfare, further driving the impetus for increased UAS autonomy.

China’s 2018 position paper emphasizes the need to fully consider how general legal norms apply to lethal autonomous weapons. The paper defines autonomous weapon systems in a very limited way, excluding many types of systems. For instance, a weapon system with significant autonomy but some human involvement, capable of differentiating between valid and invalid targets, would not be classified as LAWS under this definition. Similarly, systems with a failsafe mechanism for shutdown during malfunctions are not considered LAWS. This definition contrasts sharply with the broader understanding of “artificial intelligence weapon” used by the Chinese military. The People’s Liberation Army Military Terminology dictionary describes an AI weapon as one that employs AI to autonomously track, identify, and eliminate enemy targets, encompassing components such as systems for information gathering and management, knowledge bases, decision-making assistance, and task execution, exemplified by military robotics. This juxtaposition suggests that China’s apparent diplomatic commitment to limiting the use of “fully autonomous lethal weapons systems” is unlikely to stop Beijing from building its own.

Certain PLA thinkers even anticipate the approach of a “singularity” on the battlefield, at which human cognition can no longer keep pace with the speed of decision-making and tempo of combat in future warfare. While recognizing the importance of human-machine collaboration, and likely concerned with issues of controllability, the PLA could prove less averse to the prospect of taking humans ‘out of the loop’ to achieve an advantage.

These announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: weaponized artificial intelligence is the future of warfare. Consider that the US and China are in a strategic competition not unlike the earlier Cold War between the US and the Soviet Union. Both countries are diligently working on strategies to gain advantages over one another, whether militarily, politically, economically, or otherwise. Consider also that neither China nor the US has formally declared LAWS off-limits. It’s possible this is because both sides recognize the strategic advantage that comes from successfully integrating autonomous systems and networks of systems into a military: quicker, more adaptive, more efficient forces and therefore greater power projection. It’s certainly also possible that both sides fear autonomous weapon systems will be used against them, and are therefore developing their own as countermeasures.

Definitions and Frameworks

The International Committee of the Red Cross notes that a “weapon system with autonomy in its critical functions” is one “that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.” The theoretical framework for understanding decision-making authority in UAS involves the concept of robust autonomy: the ability of a system to continue operating in the presence of faults or to shut down safely. This concept draws parallels to the evaluation of human pilots during the licensing process and is crucial for the integration of UAS into national airspace. The framework also encompasses the development of mixed-initiative systems, where decision-making is shared between humans and intelligent software, combining human insight with autonomous control.

Basic Arguments

Proponents of fully autonomous weapons systems argue that the technology will keep soldiers out of harm’s way by keeping them off the battlefield. It will also allow military decisions to be made at superhuman speed, enabling radically improved defensive capabilities. Detractors argue that autonomous systems are brittle and can fail in surprising ways. Since 2017, the Global Partnership on AI has logged “more than 1,200 reports of intelligent systems causing safety, fairness, or other real-world problems,” from autonomous car accidents to racially biased hiring decisions. When the stakes are low, the risk of an AI accident can be tolerable, such as being presented with an uninteresting Netflix recommendation or a suboptimal driving route. But in a high-pressure, low-information military environment, both the probability and the consequences of AI accidents are bound to increase.

Petrov Scenario

The Petrov Scenario refers to a harrowing incident during the Cold War on September 26, 1983, when Soviet military officer Stanislav Petrov was on duty at Serpukhov-15, a secret command center outside Moscow. The Soviet Union’s early-warning systems detected what appeared to be an incoming missile strike from the United States. According to the system, five Minuteman intercontinental ballistic missiles had been launched.

Petrov faced a monumental decision: he could notify his superiors, which, according to protocol, would likely have led to a retaliatory nuclear strike. Given the high tensions between the U.S. and the Soviet Union, this could have escalated into a full-scale nuclear war. However, Petrov had doubts about the accuracy of the system’s report. He noted the system’s relatively new implementation and the unlikely scenario of only five missiles being used as an initial attack, leading him to conclude that it was a false alarm. Petrov’s decision to not escalate the situation despite protocol effectively averted a potential nuclear disaster.

This scenario underscores the critical importance of human judgment in decision-making processes, especially in military contexts where the consequences can be catastrophic. While machines and algorithms can process data and execute protocols rapidly, they lack the capacity for doubt, skepticism, and moral judgment that are intrinsic to human decision-makers.

The Petrov Scenario is often cited as a key example of why human-in-the-loop (HITL) decision-making is essential, particularly in areas where decisions have irreversible consequences. Machines, no matter how advanced, do not possess the ability to understand context or question the validity of their programming in the way humans can.

Introducing Meaningful Human Control (MHC)

Not all scenarios involving autonomous systems should be treated equally. In fact, it’s becoming clear that each scenario demands its own restrictions, international agreements, concepts of operations (CONOPS), and autonomous delegation authority. Several weapons already in use could be classified as having some level of autonomy, including defensive systems like the Aegis Weapon System, offensive systems designed to target high-value maritime targets like the LRASM, anti-radiation homing munitions like the HARPY, and autonomous defensive turrets like the Super aEgis II, which South Korea has set up at the DMZ.

In military applications, the balance between autonomy and control is a delicate one. Autonomy can enhance capabilities and responsiveness, but control is needed to manage complex ethical and strategic decisions that machines are not equipped to handle. The necessity for HITL is particularly evident in the use of nuclear command and control systems, where the moral and legal implications of an incorrect decision are profound.

The Petrov Scenario is a compelling argument for the inclusion of human judgment as a safeguard against the limitations and potential failures of automated systems. While advancements in technology can significantly augment military operations, they cannot replace the nuanced and critical decision-making abilities of humans. One proposed set of standards includes designing an interface offering obligatory yes/no options with regard to an attack in order to ensure meaningful human control (MHC). This decision prompt could be accompanied by additional information such as reliability, predictability, information accuracy, and explainability in an easy-to-read interface, similar to the way the intelligence community today attaches confidence metrics to its assessments. Militaries should thus strive to maintain a balance in which the efficiency of autonomous systems is harmonized with the irreplaceable oversight and ethical considerations provided by human operators.
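As a purely illustrative sketch of such an obligatory yes/no prompt (the class, field names, and example values below are assumptions for illustration, not a description of any fielded system), the fragment packages a machine-generated recommendation with confidence-style metadata and refuses to proceed without an explicit human “yes”:

```python
from dataclasses import dataclass

@dataclass
class EngagementRecommendation:
    """Machine-generated recommendation plus the metadata a human needs to judge it."""
    target_id: str
    target_confidence: float   # model's confidence the target is valid (0-1)
    intel_accuracy: float      # assessed accuracy of the supporting intelligence (0-1)
    predictability: float      # how closely the system stayed within its tested envelope (0-1)
    rationale: str             # short human-readable explanation of the recommendation

def request_human_authorization(rec: EngagementRecommendation) -> bool:
    """Present an obligatory yes/no decision; anything other than an explicit 'yes' aborts."""
    print(f"Target {rec.target_id}")
    print(f"  target confidence : {rec.target_confidence:.0%}")
    print(f"  intel accuracy    : {rec.intel_accuracy:.0%}")
    print(f"  predictability    : {rec.predictability:.0%}")
    print(f"  rationale         : {rec.rationale}")
    answer = input("Authorize engagement? [yes/no]: ").strip().lower()
    return answer == "yes"   # fail-safe default: no engagement without explicit consent

if __name__ == "__main__":
    rec = EngagementRecommendation("T-042", 0.87, 0.74, 0.91,
                                   "Signature matches briefed emitter; no civilian pattern nearby.")
    if request_human_authorization(rec):
        print("Engagement authorized by human operator.")
    else:
        print("Engagement withheld.")
```

The design choice worth noting is the default: silence, ambiguity, or any answer other than an explicit “yes” withholds the engagement, which is one concrete reading of “obligatory” human control.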

Under international humanitarian law (IHL), human commanders are expected to demonstrate the reasonableness of their attack decisions in order to explain and justify their conduct. The black box problem, that is, the opaqueness of modern machine learning algorithms, which prevents any human from fully tracing their decision-making process, poses severe ethical problems in this regard, as one cannot predict or explain how an AI will pursue its task.

An Abstract Illustration

Even if a human is in-the-loop, the amount of data and necessary speed of decision-making severely limit human oversight. The question of responsibility becomes even more pressing when there are no humans in- or on-the-loop.

To illustrate these challenges, consider a hypothetical “find-fix-finish” mission where a time-sensitive target must be neutralized. In a traditional HITL approach, human operators would analyze intelligence, confirm target identification, and authorize a strike. The latency inherent in this process could allow the target to relocate, rendering the intelligence outdated and potentially leading to mission failure or unintended casualties.

The “find-fix-finish” strategy, central to precision targeting and rapid response in combat, is emblematic of the demand for timeliness and accuracy in decision-making. However, HITL systems inherently encounter bottlenecks that can undermine these operations. The human operator, while crucial for oversight and ethical considerations, also introduces potential delays in the kill chain process due to the need for information processing, decision making, and action — a sequence that machines can execute at significantly higher speeds. The limitations of human cognition mean that as the volume and velocity of data increase, so too does the time required for humans to interpret and act on this information.

In high-pressure scenarios, these bottlenecks are not merely inefficiencies; they represent critical vulnerabilities. The latency in human response can be exploited by adversaries, particularly those using automated systems that function on shorter OODA loops. Thus, the integration of humans in the loop, while providing a layer of accountability and judgment, can ironically become a liability in scenarios where speed is of the essence.

Cognitive biases in HITL systems further complicate the decision-making process. Under the duress of combat, even the most trained individuals are susceptible to errors in judgment due to factors such as confirmation bias, where one favors information that confirms pre-existing beliefs or hypotheses. In a “find-fix-finish” scenario, this could result in the misidentification of targets or the dismissal of critical data that does not align with the expected pattern.

Response latency is another significant concern. In the time-critical “fix” phase of a mission, where rapid and accurate targeting is crucial, any delay in human response can result in mission failure or collateral damage. These latencies are not solely the result of cognitive processing but also of the communication channels and interfaces between the human operator and the UAS. As the complexity and speed of operations increase, so does the challenge of maintaining minimal response times.

Conversely, granting increased decision-making authority to machines could mitigate these issues. Autonomous systems can process vast datasets and execute decision-making protocols in a fraction of the time required by human operators. In the hypothetical mission above, an autonomous UAS equipped with advanced sensors and AI algorithms could identify and engage the target with a speed and precision unattainable by its human counterparts.

However, this shift raises significant concerns. The delegation of the “fix” phase to an autonomous system necessitates a robust framework to ensure that the UAS can distinguish between combatants and non-combatants, adhere to the rules of engagement, and make ethically sound decisions. The potential for system malfunctions or adversarial exploitation of autonomous platforms can’t be ignored either.

This transition towards increased machine authority in HITL systems is not a panacea. While it promises to address the limitations of human cognition and response latencies, it introduces new dimensions of risk, particularly in terms of trust, control, and ethical oversight. The challenge lies in navigating these trade-offs, ensuring that the benefits of speed and efficiency don’t come at the cost of accountability and human dignity.

Developing Safety in Parallel

As we continue to advance the capabilities of autonomous UAS, it’s imperative that we develop parallel advancements in AI interpretability, robustness, and ethical decision-making frameworks. Only through a concerted effort to balance the strengths of both humans and machines can we hope to maintain the integrity of HITL systems in the age of autonomous warfare.

The operational efficacy of autonomous systems is highlighted by their potential to minimize the risk of injury to soldiers and to increase productivity in combat operations. However, the necessity or desire to involve humans in low-level decision-making, for technical, legal, or political reasons, presents challenges. Systems that manage multiple craft for surveillance and combat operations demonstrate the need for a nuanced approach to autonomy that allows for human involvement without requiring continuous human monitoring. This approach has to address the challenge of controlling multiple UAS simultaneously and ensure that the systems can operate effectively in varied operational contexts.

Modus Tollens

Modus Tollens is a form of argument that allows one to deduce the falsity of a conditional’s antecedent from the falsity of its consequent. This principle has significant implications when applied to the safety of autonomous systems. The crux of the argument is that while we can never conclusively prove that a robot or autonomous system is safe, we can demonstrate its unsafety with a single counterexample. This presents a profound challenge for the certification and trust-building process for UAS.

Modus Tollens follows the structure: If P, then Q. Not Q, therefore not P. Applied to UAS, if we consider P to be “the UAS operates safely under all conditions,” and Q to be “all observed operations are safe,” then a single observation of unsafe operation (not Q) is sufficient to invalidate the claim that the UAS is safe under all conditions (not P).
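Written formally, with P and Q as defined above:

```latex
% Modus tollens, with P = "the UAS operates safely under all conditions"
% and Q = "all observed operations are safe":
\[
  \bigl( (P \rightarrow Q) \land \lnot Q \bigr) \;\vdash\; \lnot P
\]
```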

The implication of Modus Tollens in the context of UAS is that safety cannot be established by exhaustive testing, as it is impossible to test for all possible conditions under which the UAS might operate. Instead, safety is often inferred from the absence of unsafe observations within the tested conditions. However, this inference does not equate to proof of safety; it merely suggests that no evidence of unsafety has been found within a specific context.
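A rough statistical illustration of why an absence of failures is weak evidence (this is the standard “rule of three” bound, applied here under the purely illustrative assumption of independent, identically distributed test scenarios): if a UAS completes n test scenarios with zero unsafe outcomes, the 95% upper confidence bound on its per-scenario failure probability p is approximately

```latex
% Zero failures observed in n independent trials:
% solving (1 - p)^n = 0.05 for p gives the classical "rule of three".
\[
  p_{95\%} \approx \frac{3}{n},
  \qquad\text{e.g. } n = 10{,}000 \text{ clean tests still only bound } p \lesssim 3\times 10^{-4}.
\]
```

Even very large clean test campaigns therefore bound the risk; they never eliminate it.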

The principle that “you can never prove that a robot is safe, only that it is unsafe with one observation” underscores the importance of robust design, extensive testing, and continuous monitoring. It also highlights the need for fail-safe mechanisms that can mitigate the consequences of unforeseen failures. The challenge is to design UAS with the understanding that safety is a probabilistic measure, not an absolute one, and that the systems must be able to respond appropriately to both anticipated and unanticipated hazards.

Requirements for Responsible Design

Addressing this challenge requires a multi-faceted approach:

  • Incorporating layers of redundancy so that no single point of failure can lead to an unsafe condition (a minimal sketch of this pattern follows the list).
  • Developing adaptive safety measures that can handle unexpected situations by learning from past operations and simulations.
  • Ensuring that the design and decision-making processes of UAS are transparent, allowing potential safety issues to be identified more easily; transparent designs also lend themselves to traceability.
  • Establishing comprehensive regulatory frameworks that define safety standards and procedures for UAS operations.
  • Implementing ethical guidelines to ensure that the deployment of UAS does not compromise safety in pursuit of efficiency or functionality.
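As a minimal sketch of the first item (the sensor functions and field names below are hypothetical, invented for illustration), two independent checks must agree before any action is taken, and every other outcome, including an unexpected fault, falls through to a safe abort:

```python
from enum import Enum

class Decision(Enum):
    ENGAGE = "engage"
    ABORT = "abort"

def sensor_a_confirms(track: dict) -> bool:
    """Hypothetical first, independent confirmation channel (e.g. radar signature)."""
    return track.get("radar_match", False)

def sensor_b_confirms(track: dict) -> bool:
    """Hypothetical second, independent confirmation channel (e.g. electro-optical ID)."""
    return track.get("eo_match", False)

def redundant_decision(track: dict) -> Decision:
    """Two independent checks must agree; any disagreement or error fails safe."""
    try:
        if sensor_a_confirms(track) and sensor_b_confirms(track):
            return Decision.ENGAGE
    except Exception:
        # An unexpected fault in either channel is itself treated as disagreement.
        pass
    return Decision.ABORT   # fail-safe default: no single point of failure can force an engagement

if __name__ == "__main__":
    print(redundant_decision({"radar_match": True, "eo_match": False}))  # Decision.ABORT
    print(redundant_decision({"radar_match": True, "eo_match": True}))   # Decision.ENGAGE
```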

By acknowledging the limitations of proving safety and focusing on minimizing risk, we can better prepare UAS for integration into complex and dynamic environments. The power of Modus Tollens, therefore, is not just a logical curiosity but a guiding principle in the responsible development and deployment of autonomous systems.

Additionally, by designing robust test and evaluation procedures, researchers can help systems minimize unexpected biases, navigate unpredictable scenarios, and improve training data availability. AI systems can also inherit biases present in their training data, which can then lead to biased target generation. Human decision-makers may not always catch these machine biases, perpetuating discrimination. In novel or rapidly changing situations, AI may not have the ability to generate appropriate targets, while humans can adapt and use critical thinking. AI models rely on data for training and decision-making, so they may struggle in situations where data is scarce or unreliable.

On Building Trust

The rapid advancement of autonomous systems, particularly UAS, presents a paradoxical situation in modern warfare and strategic operations. As these systems gain more advanced capabilities through machine learning and AI, the delegation of increased decision-making authority to machines becomes not just a possibility but a tangible reality. However, this shift brings to the fore a critical challenge: how to build and maintain trust in machine-led operations.

Trust in autonomous systems, especially in the high-stakes environments mentioned above, is multifaceted. It encompasses reliability, predictability, and transparency. For operators and decision-makers, trusting an autonomous system means believing in its capability to perform as expected, under varying conditions, without unforeseen or unintended consequences. This trust isn’t static; it evolves with experiences and outcomes, requiring continuous validation.

The first dimension of trust is grounded in the system’s performance. Consistent, reliable outcomes from UAS operations bolster operator trust; inconsistencies or failures, even minor ones, can significantly erode it. Operators must also understand how the machine arrives at its decisions. Understanding the rationale behind AI-generated targets can be challenging, which may hinder human decision-makers in assessing the quality and trustworthiness of those targets. A “black box” approach, where the decision-making process is opaque, will continue to hinder trust in systems that obscure their rationale, whereas transparent algorithms that are understandable and interpretable will foster greater confidence. Trust also involves ethical and legal considerations. Humans may face ethical dilemmas when making decisions in sensitive or complex situations, and AI models lack the ability to navigate these nuances effectively. Machines operating autonomously must adhere to established ethical norms and legal frameworks; any deviation could create a significant trust deficit. Lastly, even while increasing machine autonomy, it is essential to maintain a level of human oversight. This control, whether direct or supervisory, assures operators that they can intervene or alter outcomes if needed, which in turn feeds back into building trust.

In a typical “find-fix-finish” mission scenario, the implications of trust are pronounced. Such missions demand precision, speed, and often discretion. Granting machines greater decision-making authority in these missions can enhance operational efficiency but also raises questions about the machines’ ability to adapt to complex, dynamic, and cluttered environments. AI systems struggle with ambiguity and may generate targets that are too literal or that fail to capture the intended meaning in certain situations. Trust in this context also involves confidence in the machine’s ability to distinguish between legitimate targets and non-combatants, a decision traditionally reserved for human judgment. AI models may lack context and may generate targets that are technically correct but socially or culturally inappropriate; humans remain far better at understanding nuance and context.

Recommendations

  • Develop algorithms with transparency in mind and subject them to rigorous validation and testing, including not just technical validation but also ethical and legal reviews.
  • Incorporate feedback mechanisms that allow machines to learn from outcomes and human inputs, improving their decision-making over time and increasing trustworthiness.
  • Foster an environment where humans and machines work in tandem, leveraging the strengths of each, for more effective and trustworthy operations.
  • Engage a broader range of stakeholders, including operators, ethicists, legal experts, and society at large, in discussions about the role and governance of autonomous systems to help shape a more trusted framework for machine-led operations.

Ethical considerations in the delegation of decision-making authority to UAS are also complex and multifaceted. The moral agency and accountability of autonomous UAS loom at the forefront, especially in the context of military operations where the use of LAWS is a contentious issue. The ethical implications of shared decision-making between humans and autonomous systems are driving research programs to understand the balance between human control and autonomous capabilities. This balance is critical to ensure ethical constraints are respected and that the systems operate within the bounds of legal and political acceptability (Barnes, Chen, & Hill).

As autonomous platforms continue to gain prominence in military operations, ensuring the reliability of machine decisions becomes paramount. Reliability in this context means the consistent accuracy and appropriateness of decisions made by autonomous systems, especially in dynamic and uncertain environments. The foundation of reliable machine decisions lies in the robustness of the algorithms driving them: designing algorithms that can handle a wide range of scenarios, including edge cases, and ensuring that they are resistant to errors and biases. This implies that rigorous testing under diverse conditions is essential, including not only controlled simulations but also field tests in realistic environments to validate the decision-making capabilities of the machines. Autonomous systems must also be capable of learning from past experience and adapting to new situations, or of incorporating transfer learning, where learning from other environments and other similar machines can be shared. This involves advanced machine learning techniques that enable systems to evolve and improve over time.
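As a minimal sketch of what “rigorous testing under diverse conditions” can look like in code (the scenario parameters, the stand-in policy, and the safety property below are all hypothetical assumptions, not a real autonomy stack), a randomized scenario sweep records every violation of a stated safety property rather than relying on a handful of nominal runs:

```python
import random

def policy_under_test(visibility: float, link_quality: float, clutter: float) -> str:
    """Stand-in for the autonomy stack's decision: 'engage', 'hold', or 'return_to_base'."""
    if link_quality < 0.2:
        return "return_to_base"
    if visibility < 0.3 or clutter > 0.8:
        return "hold"
    return "engage"

def safety_property(decision: str, visibility: float) -> bool:
    """Example requirement: never engage when visibility is below the identification threshold."""
    return not (decision == "engage" and visibility < 0.3)

def stress_test(trials: int = 100_000, seed: int = 0) -> list:
    """Randomized scenario sweep; returns every recorded violation for later analysis."""
    rng = random.Random(seed)
    violations = []
    for _ in range(trials):
        scenario = {
            "visibility": rng.random(),
            "link_quality": rng.random(),
            "clutter": rng.random(),
        }
        decision = policy_under_test(**scenario)
        if not safety_property(decision, scenario["visibility"]):
            violations.append({**scenario, "decision": decision})
    return violations

if __name__ == "__main__":
    failures = stress_test()
    print(f"{len(failures)} violations out of 100,000 randomized scenarios")
```

Consistent with the Modus Tollens discussion above, a clean sweep only shows that no counterexample was found within the sampled conditions; a single logged violation is enough to invalidate the safety claim.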

Implementing real-time monitoring systems that allow human operators to oversee machine operations in a transparent manner will be paramount. These systems should provide comprehensive insight into the machine’s decision-making process, including the rationale behind specific actions. Equipping UAS with alert systems that notify operators of critical decisions or anomalies, coupled with the ability for humans to override machine decisions, should also be required. This safeguard ensures that humans remain the ultimate decision-makers, especially in high-stakes situations. It also implies an interesting corollary: humans would remain fully responsible for failing to intervene when an autonomous system makes a mistake. Responsibility, unfortunately, is largely outside the scope of this post. Lastly, maintaining detailed logs of machine actions and decisions (audit trails) will be essential for accountability and post-mission analysis. These records enable operators and analysts to review and understand the machine’s actions, contributing to trust and reliability and feeding back to engineers to support updates.
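One way to picture the audit-trail requirement is the hypothetical logging wrapper below; every field name and event type is invented for illustration, but the idea is an append-only, timestamped record of each machine decision, operator alert, and human override that can be replayed during post-mission analysis:

```python
import json
import time

class AuditTrail:
    """Append-only log of machine decisions and human interventions for post-mission review."""

    def __init__(self, path: str = "mission_audit.jsonl"):
        self.path = path

    def record(self, event_type: str, **fields) -> None:
        entry = {"timestamp": time.time(), "event": event_type, **fields}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Hypothetical usage during a mission:
if __name__ == "__main__":
    trail = AuditTrail()
    trail.record("machine_decision",
                 action="track_target", target_id="T-042",
                 rationale="emitter signature match", confidence=0.87)
    trail.record("operator_alert", reason="lethal action recommended", target_id="T-042")
    trail.record("human_override", operator="op-17", decision="abort",
                 reason="possible civilian vehicles in blast radius")
```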

While autonomous systems are designed to operate independently, the role of human judgment remains critical. It’s essential to balance autonomy with a level of human oversight that allows for intervention when necessary, for instance, when lethal force is recommended. Certain decisions, especially those involving ethical and moral considerations, should remain within the purview of human judgment. Machines, despite their advanced algorithms, may not fully comprehend the nuances of ethical warfare and the value of human life. Human judgment also plays a key role in interpreting complex, ambiguous situations where contextual understanding and flexibility are required. Humans can consider a broader range of factors and potential consequences than current autonomous systems.

The heart of the debate among experts revolves around whether additional guidance or rules specific to LAWS are needed to further specify how to ensure that LAWS are developed and used in ways that enable and facilitate IHL compliance.

It remains prudent to work toward a new international treaty that prohibits and regulates autonomous weapons systems, since China’s and the US’s current terminology is ambiguous and not aligned. As Gregory Allen, an expert from the national defense and international relations think tank Center for Strategic and International Studies, argues, this language establishes a lower threshold than the “meaningful human control” demanded by critics.

Balancing the strengths and limitations of AI target generation with human oversight is critical for ensuring responsible and effective use of AI across applications. One school of thought argues that technological development can overcome the existing challenges to MHC, as improvements in AI will allow systems to adhere ever more closely to IHL principles while retaining the advantages of faster and more objective decision-making. Assuming further advances in AI, one could imagine machines becoming better at complying with IHL than humans, although this remains a distant prospect. The other school of thought argues that IHL should be interpreted as requiring inherent limits on the autonomy of LAWS, because IHL principles can only be fully met through contextual and ethical judgments by humans; on this view, LAWS should not be employed unless MHC can be enforced to close the responsibility gap. It appears the jury is still out.

As the intensification of military and strategic competition in AI could result in destabilizing arms race dynamics, the United States and allies should also explore options to mitigate the risks to strategic stability that could result from great powers’ pursuit of AI-enabled capabilities to achieve military advantage.

Furthermore, research and development efforts should be incentivized to study solutions to the difficulties of transparent and explainable AI. If the rise of autonomous weapons systems is inevitable, then they should be transparent, predictable, explainable, and overridable to the maximum extent possible, in order to generate enough trust to remain controllable.

As UAS become more prevalent and capable, the challenge of trust will increasingly define their utility and acceptance. Building this trust requires a multi-dimensional approach, recognizing the complexity of human-machine interactions and the high stakes involved in military and strategic operations. The future of autonomous warfare hinges not just on technological advancements but also on the successful cultivation of trust in these advanced systems.

This post dissected the multifaceted aspects of HITL systems, from the market forces driving autonomy and the implications of escalated machine authority in mission scenarios to the vital need for establishing trust in autonomous machines. Through a multi-pronged approach that includes transparent algorithms, rigorous testing, and continuous human oversight, the author advocates for a balanced synergy that leverages both human judgment and machine efficiency. The investigation culminates in a set of recommendations designed to harmonize the collaborative potential of humans and machines, charting a course for a future where technology enhances, rather than replaces, human decision-making in warfare.

For anyone reading this: all comments, critical or otherwise, are appreciated. What do you think the future holds for military and civilian use of lethal autonomous weapons? How can I improve my writing? Do you have similar stories? What can you tell me about mine?
