Resolved: States ought to ban lethal autonomous weapons (bibliography)

General Morality

The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons (2016). This article reviews the major moral objections to autonomous weapons systems.

The Ethics of Acquiring Disruptive Technologies: Artificial Intelligence, Autonomous Weapons, and Decision Support Systems (2018)

An Ethical Analysis of the Case for Robotic Weapons Arms Control (2019). While the use of telerobotic and semi-autonomous weapons systems has been enthusiastically embraced by politicians and militaries around the world, their deployment has not gone without criticism. Strong critics such as Asaro (2008), Sharkey (2008, 2009, 2010, 2011, and 2012), and Sparrow (2007, 2009a, 2009b, 2011) argue that these technologies have multiple moral failings and that their deployment must, on principle, be severely limited or perhaps even eliminated. These authors and researchers, along with a growing list of others, have founded the International Committee for Robot Arms Control as a means of advancing their arguments and advocating for future talks and treaties that might limit the use of these weapons. Others, such as Arkin (2010), Brooks (2012), Lin, Abney and Bekey (2008, 2012), and Strawser (2010), have argued that there are compelling reasons to believe that, at least in some cases, deployment of telerobotic and semi-autonomous weapons systems can contribute to marginal improvements in the state of ethical and just outcomes in armed combat. This paper traces the main arguments posed by both sides of the issue and suggests certain considerations, motivated by the philosophy of technology, that might be worth adding to future robotic arms control treaties. It argues that, through the process of reverse adaptation, these technologies can change our notions of just war theory to the point that caution in their use is recommended until further analysis of these effects can be accomplished. A realistic stance toward robotic weapons arms control is argued for, without losing sight of the positive role these technologies can play in resolving armed conflict in the most just and ethical manner possible.

General

CRS Report R44466, Lethal Autonomous Weapon Systems: Issues for Congress, by Nathan J. Lucas.

Artificial intelligence and national security (2020).

The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development (2019)

Autonomous weapons and the new laws of war (2019)

Autonomous weapons systems and international crises (2018).

How Intelligent Drones Are Shaping the Future of Warfare (2017)

Robot Wars: US Empire and geopolitics in the robotic age (2017).

How will the robot age transform warfare? What geopolitical futures are being imagined by the US military? This article constructs a robotic futurology to examine these crucial questions. Its central concern is how robots – driven by leaps in artificial intelligence and swarming – are rewiring the spaces and logics of US empire, warfare, and geopolitics. The article begins by building a more-than-human geopolitics to de-center the role of humans in conflict and foreground a worldly understanding of robots. The article then analyzes the idea of US empire, before speculating upon how and why robots are materializing new forms of proxy war. A three-part examination of the shifting spaces of US empire then follows: (1) Swarm Wars explores the implications of miniaturized drone swarming; (2) Roboworld investigates how robots are changing US military basing strategy and producing new topological spaces of violence; and (3) The Autogenic BattleSite reveals how autonomous robots will produce emergent, technologically event-ful sites of security and violence – revolutionizing the battlespace. The conclusion reflects on the rise of a robotic US empire and its consequences for democracy.

Death by algorithm (2019)

Army of None: Autonomous weapons and the future of war (2018)

Robotic Drones: Coming to a War Near You (2019)

Drone Warfare: The autonomous debate (2018)

AI Drones and UAVs in the military

Killer Robots Aren’t Regulated — Yet (2019)

How swarming drones will change warfare (2019)

Drones that Kill their Own: Will Lethal autonomous drones make it to the battlefield? (2018)

Autonomous and aerial weapons (2019)

Lethal and autonomous: coming soon to a sky near you (2019)

Autonomous systems in the combat environment (2020)

Air Force Betting on New Robotic Wingman (2020)

Capitalism

Capitalism, War, and Robotics: https://www.socialiststudies.org.uk/war%20caprobot.shtml

General Websites

Autonomous Drones Library

AI Warfare @ Lawfare

Campaign to Stop Killer Robots

China

AI weapons in China’s military innovation (2020)

China is selling lethal autonomous drones (2020)

China and the U.S. Are Fighting a Major Battle Over Killer Robots and the Future of AI (2019)

The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order (2018)

Who's regulating the future of warfare? (2019)

UK

The development of autonomous military drones in the UK (2018)

Aff — General

The morality of autonomous robots (2013). While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be ‘no’ and offer several reasons for banning autonomous robots. (1) Such a robot treats a human as an object, instead of as a person with inherent dignity. (2) A machine can only mimic moral actions; it cannot be moral. (3) A machine run by a program has no human emotions, no feelings about the seriousness of killing a human. (4) Using such a robot would be a violation of military honor. We therefore conclude that the use of an autonomous robot in lethal operations should be banned.

Gariepy, Ryan (2017): https://www.clearpathrobotics.com/2017/08/clearpath-founder-signs-open-letter-un-ban-autonomous-weapons/

Armed robots, autonomous weapons and ethical issues (2018). Lethal autonomous weapon systems pose a grave threat to humanity and should have no place in the world. Moreover, once developed, these technologies will likely proliferate widely and be available to a wide variety of actors, so the military advantage of these systems will be temporary and limited (Kayser, 2018). Completely autonomous weapons (LAWS) that leave humans out of the loop should be banned.

Leveringhaus, Alex (2017), “Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict”: https://blogs.icrc.org/law-and-policy/2017/10/06/distance-weapons-technology-and-humanity-in-armed-conflict/

McFarland, Mac (2018): Leading AI researchers vow to not develop autonomous weapons:  https://money.cnn.com/2018/07/18/technology/ai-autonomous-weapons/index.html

Wagner, Markus (2014): “The Dehumanization of International Humanitarian Law: Legal, Ethical and Political Implications of Autonomous Weapon Systems”, Vanderbilt Journal of Transnational Law, Vol. 47, p. 1380: https://www.researchgate.net/profile/Markus_Wagner11/publication/282747793_The_Dehumanization_of_International_Humanitarian_Law_Legal_Ethical_and_Political_Implications_of_Autonomous_Weapon_Systems/links/561b394b08ae78721f9f907a/The-Dehumanization-of-International-Hu

Burt, Peter (2018): “Off the Leash: The development of autonomous military drones in the UK”, Drone Wars UK: https://dronewarsuk.files.wordpress.com/2018/11/dw-leash-web.pdf

Autonomous weapons systems and the laws of war (2019)

War on autopilot? It will be harder than the Pentagon thinks (2020)

A partial ban on autonomous weapons would make everyone safer

“Drone Swarm” imagines autonomous warfare as a huge chore (2020)

Swarms Of Mass Destruction: The Case For Declaring Armed And Fully Autonomous Drone Swarms As WMD (2020)

Autonomous Weapons: An Open Letter from AI & Robotics Researchers (2020)

Mind the Gap: The Lack of Accountability for Autonomous Robots (2015)

Losing Humanity: The Case Against Killer Robots (2012)

Autonomous weapons and operational risk (2016)

Kenneth Anderson, Daniel Reisner, and Matthew C. Waxman, “Adapting the Law of Armed Conflict to Autonomous Weapon Systems,” International Law Studies 90 (2014): 391–393; and Kenneth Anderson and Matthew C. Waxman, “Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can,” Jean Perkins Task Force on National Security and Law Essay Series (Stanford, Calif.: Stanford University, The Hoover Institution, April 10, 2013). AWS make war too easy.

Wendell Wallach, A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (New York: Basic Books, 2015); and Human Rights Watch, Mind the Gap: The Lack of Accountability for Killer Robots (New York: Human Rights Watch, 2015). No accountability for unethical AWS use.

Aff — Pro Ban

N. Sharkey, ‘Saying “no!” to lethal autonomous targeting’, Journal of Military Ethics, 9 (2010), 369, 378.

On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making (2016).

This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral, but also legal ones. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conflict. Indeed, this requirement is implicit in the principles of distinction, proportionality, and military necessity that are found in international treaties, such as the 1949 Geneva Conventions, and firmly established in international customary law. Similar principles are also implicit in international human rights law, which ensures certain human rights for all people, regardless of national origins or local laws, at all times. I argue that the human rights to life and due process, and the limited conditions under which they can be overridden, imply a specific duty with respect to a broad range of automated and autonomous technologies. In particular, there is a duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case. I argue that it would be beneficial to establish this duty as an international norm, and express this with a treaty, before the emergence of a broad range of automated and autonomous weapons systems begin to appear that are likely to pose grave threats to the basic rights of individuals.

S. Russell, ‘Take a stand on AI weapons’, Nature, 521 (2015), 415, 415.

Aff — Dignity Impacts

Dignity has been called the ‘mother’ of human rights. See B. Schlink, ‘The concept of human dignity: current usages, future discourses’ in C. McCrudden (ed.), Understanding Human Dignity (Oxford University Press, 2013), 632. See also P. Carozza, ‘Human dignity and judicial interpretation of human rights: a reply’, European Journal of International Law 19(5) (2008), 931–44, available at https://ejil.oxfordjournals.org/content/19/5/931.full; Nils Petersen, ‘Human dignity, international protection’, available at https://opil.ouplaw.com/view/10.1093/law:epil/9780199231690/law-9780199231690-e809?print.

Negative — General

A Posthuman-Xenofeminist Analysis of the Discourse on Autonomous Weapons Systems and Other Killing Machines (2018). This article argues that autonomous weapons cannot be effectively banned because it is not possible to arrive at a meaningful definition.

Jean-Baptiste Jeangène Vilmer, Terminator Ethics: Should We Ban “Killer Robots”? (online) Ethics and International Affairs 2015 (accessed 26 February 2017). This article likewise argues that we cannot distinguish autonomous robots from non-autonomous robots.

Ron Arkin, ‘Lethal Autonomous Systems and the Plight of the non-combatant,’ (2013) AISB Quarterly 137. Alternatively, Arkin has proposed regulation of autonomous weapons via a test. The Arkin test states that a machine can be employed when it can be shown that it can respect the laws of war as well as or better than a human in similar circumstances. Some scholars have argued that if a machine passes this test, we have a moral obligation to deploy it, as it may work to uphold IHL better than ever before. There are many opponents to such a test. Sparrow, for example, has noted that Arkin’s argument ‘depends on adopting a consequentialist ethical framework that is concerned only with the reduction of civilian casualties’. Thus, while such a machine could be justified based on statistics, this avoids the fact that the IHL standard for the protection of civilian life should be perfection.

Autonomous killer robots are probably good news (2014). Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? In this policy paper, we argue that they do not take responsibility away from humans; in fact they increase the ability to hold humans accountable for war crimes. Also, using LAWS in war, as compared to a war without them, would probably make wars a bit less bad through an overall reduction of human suffering, especially among civilians. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict, especially as compared to extant remote-controlled weapons. The European Parliament and a UN special rapporteur have called for a moratorium or ban on LAWS, supported by the vast majority of writers and campaigners on the issue. The ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban. However, the main arguments in favour of a ban are unsound. We are afraid of killer robots, but we should not be: they are probably good news.

Autonomy in weapons systems (2017). (Summarized below under “Working Paper: ‘Autonomy in Weapon Systems’.”)

Characteristics of lethal autonomous weapons systems (2017). (Summarized below under “Working Paper: ‘Characteristics of Lethal Autonomous Weapons Systems’.”)

US Statement on autonomous weapons systems (2017). “It remains premature, however, to consider where these discussions might or should ultimately lead. For this reason, we do not support the negotiation of a political or legally binding document at this time. The issues presented by LAWS are complex and evolving, as new technologies and their applications continue to be developed. We must be cautious not to make hasty judgments about the value or likely effects of emerging or future technologies. As history shows, our views of new technologies may change over time as we find new uses and ways to benefit from advances in technology. In particular, we want to encourage innovation and progress in furthering the objects and purposes of the Convention. We should therefore proceed with deliberation and patience.”


As I discussed in a previous post, the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS) is meeting for the second time to discuss emerging issues in the area of LAWS. The discussions are grounded in four overarching issues:

  1. characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW;
  2. further consideration of the human element in the use of lethal force, aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems;
  3. review of potential military applications of related technologies in the context of the Group’s work;
  4. and possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS in the context of the objectives and purposes of the CCW without prejudicing policy outcomes and taking into account past, present and future proposals.

As two of the countries reportedly investing in developing LAWS, the U.S. and the U.K. are ones to watch at this week’s conference. For over a century, the U.S. and the U.K. have maintained a “special relationship,” one that “has done more for the defense and future of freedom than any other alliance in the world.” With one part of the special relationship being the closeness of the U.S. and U.K. militaries, it is unsurprising that the U.S. and the U.K. have much the same opinion when it comes to LAWS: It is too early for a prohibition. For the U.S., the reasoning is grounded in concrete examples of how past “autonomy-related” technologies have advanced civilian protection. Alternatively, the U.K. is wary based on a skepticism that LAWS will ever exist.

This post will outline the U.S. and U.K. positions on LAWS, as advanced by working papers, statements and policies.

U.S. Position on LAWS

Ahead of the first GGE meeting on LAWS, the U.S. submitted two working papers: “Autonomy in Weapon Systems” and “Characteristics of Lethal Autonomous Weapons Systems.” The U.S. also made three publicly available statements at that meeting: an opening statement, a statement on appropriate levels of human judgment, and a statement on the way forward. Similarly, the U.S. published a third working paper, Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems, in advance of the second GGE. The U.S. position as conveyed in its published statements is that there are potential humanitarian and military benefits of the technology behind LAWS, so it is premature to ban them.

Defense Science Board Task Force: Aiming for Future Conflicts (2018)

Working Paper: “Autonomy in Weapon Systems”

In “Autonomy in Weapon Systems,” the U.S. sets out its assessment of a legal review of weapons with autonomous functions, ultimately concluding that rigorous testing and sound development of weapons—while not required by international humanitarian law (IHL)—can support the implementation of IHL requirements. The U.S. takes care to highlight a distinction between targeting issues and questions of weapons being “inherently indiscriminate.” Weapons that are “inherently indiscriminate” are per se against IHL, while most targeting issues arise on a case-by-case basis and are only determinable when set against the background of a particular military operation. As Defense Department policy requires the acquisition and procurement of Defense Department weapons and weapon systems to be consistent with applicable domestic and international law, the department’s legal review of a weapon focuses on that weapon’s illegality per se. While the use of autonomy to aid in the operation of weapons is not illegal per se, it may be appropriate for programmers of weapons that use autonomy in target selection and engagement to consider programming measures that reduce the likelihood of civilian casualties. To that end, practices like those espoused in Pentagon policy, requiring autonomous and semi-autonomous weapons systems to undergo “rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E),” can help reduce the risk of unintended combat engagements.

Next, the U.S. explains how weapon systems with autonomous functions could comply with IHL principles in military operations, augmenting human assessments of IHL issues by adding the assessment of those issues by a weapon itself (through computers, software and sensors). The law of war requires that individual human beings—using the mechanism of state responsibility—ensure compliance with principles of distinction and proportionality, even when using autonomous or semi-autonomous weapon systems. By contrast, it does not require weapons—even autonomous ones—to make legal determinations; rather, weapons must be capable of being employed consistent with IHL principles. The U.S. then points to a best practice (found within Defense Department policy) for improving human-machine interfaces that assist operators in making accurate judgements: The interface between people and machines for LAWS should be readily understandable to trained operators; provide traceable feedback on system status; and provide clear procedures for trained operators to activate and deactivate system functions. Instead of replacing a human’s judgment of, and responsibility for, IHL issues, the U.S. contends that LAWS could actually improve humans’ ability to implement those legal requirements.

The U.S. then describes how autonomy in weapon systems can create more capabilities and enhance the way IHL principles are implemented. The U.S. argues that military and humanitarian interests can converge, ultimately reducing the risk weapons pose to civilians and civilian objects. For example, the U.S. discusses how the application of autonomous functions could be used to create munitions that self-deactivate or self-destruct; create more precise bombs and missiles; and allow defensive systems to select and engage enemy projectiles. While the first two examples can be used to reduce the risk of harm to the civilian population, the third can provide commanders more time to respond to threats. Autonomous functions could allow for increased operational efficiency and a more precise application of force.

Finally, the U.S. characterizes the framework of legal accountability for weapons with autonomous functions. The U.S. first notes that states are responsible for the use of weapons with autonomous functions through the individuals in their armed forces, and that they can use investigations, individual criminal liability, civil liability, and internal disciplinary measures to ensure accountability. The U.S. next explains how persons are responsible for individual decisions to use weapons with autonomous functions, though issues normally present only in the use of weapon systems are now also present in the development of such systems. The upshot of this issue spread is that people who engage in wrongdoing in weapon development and testing could be held accountable. The U.S. states that the standard of care due to civilian protection must be assessed based on general state practice and common operational standards of the military profession. Finally, the U.S. observes that decision-makers must generally be judged based on the information available to them at the time; thus, training on, and rigorous testing of, those weapons—as described in Defense Department policy—can help promote good decision-making and accountability.

Working Paper: “Characteristics of Lethal Autonomous Weapons Systems”

In “Characteristics of Lethal Autonomous Weapons Systems,” the U.S. explains why it is unnecessary for the GGE to adopt a specific definition of LAWS. The U.S. contends that IHL provides an adequate system of regulation for weapon use and that the GGE can understand the issues LAWS pose with a mere understanding of LAWS’ characteristics, framed by a discussion of the CCW’s object and purpose. The U.S. further notes that the development of a definition—with a view toward describing the weapons to be banned—would be premature and counterproductive, diverting time that should be spent understanding issues to negotiating them.

In support of its idea that identification of characteristics of LAWS would promote a better understanding of them, the U.S. frames the way these characteristics should be set out. The U.S. asserts that characteristics of LAWS should be intelligible to all relevant audiences; not identified based on specific technological assumptions, such that those characteristics could be rendered obsolete by technological development; and not defined based on the sophistication of the machine intelligence. The U.S. articulates how focusing on sophistication of machine reasoning stimulates unwarranted fears. Instead, the U.S. stresses that these characteristics should focus on how humans will use the weapon and what they expect it to do.

Finally, the U.S. offers some internal Pentagon definitions to describe autonomy in weapon systems. Though created after considering existing weapon systems, the U.S. offers these definitions for the GGE’s consideration, as they focus on what the U.S. believes is the most important issue posed by autonomy in weapon systems: people who use the weapons can rely on them to select and engage targets. One of the definitions the U.S. offers is that for “autonomous weapon system,” which it defines as “[a] weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

Opening Statement at First GGE

In its opening statement delivered by State Department lawyer Charles Trumbull, the U.S. emphasized the importance of the weapon review process in the development and acquisition of new weapon systems. The U.S. also asserted that it continues to believe advances in autonomy could facilitate and enhance the implementation of IHL—particularly the issues of distinction and proportionality. Noting that “[i]t remains premature … to consider where these discussions might or should ultimately lead,” the U.S. stated that it does not support the negotiation of a political or legally binding document at this time.

Statement at First GGE: Intervention on Appropriate Levels of Human Judgment over the Use of Force

In the second statement delivered by Lieutenant Colonel John Cherry, Judge Advocate, U.S. Marine Corps, the U.S. answered one of the questions Chairman Amandeep Singh Gill posed in his food-for-thought paper: Could potential LAWS be accommodated under existing chains of military command and control? The U.S. responded in the affirmative, detailing how commanders currently authorize the use of lethal force, based on indicia like the commander’s understanding of the tactical situation, the weapon’s system performance, and the employment of tactics, techniques and procedures for that weapon. Understanding that states will not develop and field weapons they cannot control, the U.S. urged a focus on “appropriate levels of human judgment over the use of force,” rather than the controllability of the weapon system. The U.S. contended that a focus on the level of human judgment is appropriate because it both centers on the human beings to whom IHL applies, and reflects the fact that there is not a fixed, one-size-fits-all level of human control that should be applied to every weapon system.

Statement at First GGE on the Way Forward

In its last statement delivered by State Department lawyer Joshua Dorosin, the U.S. reiterated its support of a continued discussion of LAWS within the CCW. It further cautioned that it is premature to negotiate a political document or code of conduct when states lack a shared understanding of the fundamental LAWS-related issues. As emphasized in its working paper, the U.S. urged states to establish a working understanding of the common characteristics of LAWS. Finally, the U.S. stated its support for further discussions on human control, supervision or judgment over decisions to use force, as well as state practice on weapons reviews.

Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems (2017). In this paper, the U.S. examines how autonomy-related technologies might enhance civilian protection during armed conflict. The U.S. explores the ways in which emerging technologies in LAWS could enhance civilian protection. It contends that state practice shows five ways that civilian casualties might be reduced through use of LAWS: through incorporating autonomous self-destruct, self-deactivation, or self-neutralization mechanisms; increasing awareness of civilians and civilian objects on the battlefield; improving assessments of the likely effects of military operations; automating target identification, tracking, selection, and engagement; and reducing the need for immediate fires in self-defense. After discussing each way that emerging technologies in the area of LAWS have the potential to reduce civilian casualties and damage to civilian objects, the U.S. concludes that states should “encourage such innovation that furthers the objectives and purposes of the Convention,” instead of banning it.

The U.S. first discusses examples of weapons with electronic self-destruction mechanisms and electronic self-deactivating features that can help avoid indiscriminate area effects and unintended harm to civilians or civilian objects. The U.S. also points to weapons systems, such as anti-aircraft guns, that use self-destructing ammunition; this ammunition destroys the projectile after a certain period of time, diminishing the risk of inadvertently striking civilians and civilian objects. Though the mechanisms are not new, more sophisticated mechanisms such as these often accompany advances in weaponry. Just as Defense Department policy dictates that measures be taken to ensure autonomous or semi-autonomous weapon systems “complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so … terminate engagements or seek additional human operator input before continuing the engagement,” so too could future LAWS be required to incorporate self-destruct, self-deactivation, or self-neutralization measures.

Second, the U.S. underscores the way that artificial intelligence (AI) could help commanders increase their awareness of the presence of civilians, civilian objects and objects under special protection on the battlefield, clearing the “fog of war” that sometimes causes commanders to misidentify civilians as combatants or be unaware of the presence of civilians in or near a military objective. AI could enable commanders to sift through the overwhelming amount of information to which they might have access during military operations—like hours of intelligence video—more effectively and efficiently than humans could do on their own. In fact, the Pentagon is currently using AI to identify objects of interest from imagery autonomously, allowing analysts to focus instead on more sophisticated tasks that do require human judgment. The increased military awareness of civilians and civilian objects could not only help commanders better assess the totality of expected incidental loss of civilian life, injury to civilians, and damage to civilian objects from an attack, but could also help commanders identify and take additional precautions.

Third, the U.S. discusses how AI could improve the process of assessing the likely effects of weapon systems, with a view toward minimizing collateral damage. With the U.S. already using software tools to assist in these assessments, more sophisticated computer modelling could allow military planners to assess the presence of civilians or effects of a weapon strike more quickly and more often. These improved assessments could, in turn, help commanders identify and take additional precautions, while offering the same or a superior military advantage in neutralizing or destroying a military objective.

Fourth, the U.S. points out how automated target identification, tracking, selection and engagement functions can reduce the risk weapons pose to civilians, and allow weapons to strike military objectives more accurately. The U.S. lists a number of weapons—including the AIM-120 Advanced Medium-Range Air-to-Air Missile (AMRAAM); the GBU-53/B Small Diameter Bomb Increment II (SDB II); and the Common Remotely Operated Weapon Station (CROWS)—that utilize autonomy-related technology to strike military objectives more accurately and with less risk of harm to civilians and civilian objects. The U.S. contends that those examples illustrate the potential of emerging technologies in LAWS to reduce the risk to civilians in applying force.

Finally, the U.S. engages with ways that emerging technologies could reduce the risk to civilians when military forces are in contact with the enemy and applying immediate use of force in self-defense. The U.S. contends that the use of autonomous systems can reduce human exposure to hostile fire, thereby reducing the need for immediate fires in self-defense. It points to the way that remotely piloted aircraft or ground robots can scout ahead of forces, enable greater standoff distance from enemy formations, and thus allow forces to exercise tactical patience, reducing the risk of civilian casualties. The U.S. also notes how technologies—like the Lightweight Counter Mortar Radar—can automatically detect and track shells and backtrack to the position of the weapon that fired the shell; this and similar technologies can be used to reduce the risk of misidentifying the location or source of enemy fire. The U.S. lastly asserts that defensive autonomous weapons—like the Counter-Rocket, Artillery, Mortar (C-RAM) Intercept Land-Based Phalanx Weapon System—can counter incoming rockets, mortars, and artillery, allowing additional time to respond to an enemy threat.

Negative — AWS Allows Better Targeting

See, e.g., B. J. Strawser, Killing by Remote Control: The Ethics of an Unmanned Military (Oxford University Press, 2013), 17. See also M. Horowitz and P. Scharre, ‘Do killer robots save lives?’, available at www.politico.com/magazine/story/2014/11/killer-robots-save-lives-113010.html.

R. C. Arkin, ‘Lethal autonomous weapons systems and the plight of the non-combatant’ (2014)

Do Killer Robots Save Lives? (2015)

Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots (Boca Raton, Fla.: CRC Press, 2009). AWS don’t kill out of anger and emotion.

Negative — Regulations CP

Future Warfare and the Decline of Human Decisionmaking

International Governance of Autonomous Military Robots

Negative — Add in Human Component Counterplan

Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture (2016)

This article provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the onset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.
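
For debaters unfamiliar with the architecture, a minimal sketch of the kind of “ethical governor” the abstract describes is below, in Python. Every name in it (Action, Constraint, EthicalGovernor) is a hypothetical illustration chosen for this sketch, not the paper’s implementation; the two toy constraints merely stand in for distinction and proportionality, and the log stands in for the responsibility-advising mechanism (recommendation 4).

```python
# Hypothetical sketch of an "ethical governor", loosely in the spirit of the
# architecture described above; not the paper's actual implementation.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Action:
    """A proposed robot action, reduced to a few illustrative features."""
    lethal: bool
    target_is_combatant: bool
    expected_collateral: float  # rough estimate of incidental civilian harm

@dataclass
class Constraint:
    """A named predicate every permissible action must satisfy."""
    name: str
    check: Callable[[Action], bool]

@dataclass
class EthicalGovernor:
    """Suppresses proposed actions that violate any encoded constraint
    (cf. recommendation 1, suppression of unethical behavior) and keeps a
    log for operator accountability (cf. recommendation 4)."""
    constraints: List[Constraint] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def evaluate(self, action: Action) -> Tuple[bool, List[str]]:
        violated = [c.name for c in self.constraints if not c.check(action)]
        self.log.append(f"{action} -> violated={violated or 'none'}")
        return (not violated, violated)

# Toy constraints standing in for distinction and proportionality.
governor = EthicalGovernor(constraints=[
    Constraint("distinction", lambda a: not a.lethal or a.target_is_combatant),
    Constraint("proportionality", lambda a: a.expected_collateral < 0.1),
])

ok, why = governor.evaluate(
    Action(lethal=True, target_is_combatant=False, expected_collateral=0.0))
print(ok, why)  # False ['distinction'] -- the lethal action is suppressed
```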

Counterplan — Laws of War

Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can (2014)

Counterplan — Arkin Test

Ron Arkin, ‘Lethal Autonomous Systems and the Plight of the non-combatant,’ (2013) AISB Quarterly 137. Alternatively, Arkin has proposed regulation of autonomous weapons via a test. The Arkin test states that a machine can be employed when it can be shown that it can respect the laws of war as well as or better than a human in similar circumstances. Some scholars have argued that if a machine passes this test, we have a moral obligation to deploy it, as it may work to uphold IHL better than ever before. There are many opponents to such a test. Sparrow, for example, has noted that Arkin’s argument ‘depends on adopting a consequentialist ethical framework that is concerned only with the reduction of civilian casualties’. Thus, while such a machine could be justified based on statistics, this avoids the fact that the IHL standard for the protection of civilian life should be perfection.

Terrorism

Terrorist groups, artificial intelligence, killer drones. New technologies have always been a critical component of military strategy and preparedness. One new technology on the not-too-distant technological horizon is lethal autonomous robotics, which would consist of robotic weapons capable of exerting lethal force without human control or intervention. There are a number of operational and tactical factors that create incentives for the development of such lethal systems as the next step in the current development, deployment and use of autonomous systems in military forces. Yet, such robotic systems would raise a number of potential operational, policy, ethical and legal issues. This article summarizes the current status and incentives for the development of lethal autonomous robots, discusses some of the issues that would be raised by such systems, and calls for a national and international dialogue on appropriate governance of such systems before they are deployed. The article reviews potential modes of governance, ranging from ethical principles implemented through modifications or refinements of national policies, to changes in the law of war and rules of engagement, to international treaties or agreements, or to a variety of other “soft law” governance mechanisms.