Lethal Autonomous Weapon Systems and Automation Bias
Autonomy in weapon systems is already a genuine concern. States formulate their own definitions of these systems and strive to impose their understanding upon other states. For a considerable number of states, a total ban on such weapons would be the ideal solution; failing that, states anxious about increasing autonomy in war-making capabilities adopt a second-best scenario of containing the risks created by the deployment of such systems. To this end, placing them under meaningful human control emerges as an important political and legal objective. The author believes that placing autonomous weapons under human supervision, despite its initial promise, will yield negative results, because humans tend to be all too willing to follow the solutions generated by autonomous systems. First observed in civilian industries such as aviation and healthcare, automation bias has the potential to negate most, if not all, of the supervision measures expected to ensure the proper implementation of international humanitarian law.
Introduction
Algorithmic decision-making is becoming common in many societal domains, such as the military, the criminal justice system, and law enforcement, and this raises serious ethical concerns[23]. This paper aims to shed light on one of these concerns: an important human trait with potentially grave consequences, the so-called automation bias. First detected in civilian sectors, this bias can briefly be defined as the human inclination to rely too heavily on, and to believe, information from autonomous systems, even when contradicting or differing information from other sources is available or could easily be found with the right search[16].
In the upcoming years, one seems justified in taking for granted that systems with different levels of autonomy will serve various objectives during armed conflicts[45]. Since autonomous weapon systems appear to be the new weapons of choice in warfare[23], this bias will have warfare-related impacts as well. Such impacts first and foremost jeopardize the proper implementation of IHL, which is undoubtedly essential for protecting civilians and non-military objects.
This paper is an endeavor to elaborate on an innate human tendency and its impact on human-machine interaction. It will first present the development of the debate on autonomy in weaponry. Following that, a definition will be presented for the purposes of this paper. Human-machine interaction will then be briefly investigated, after which the author will elaborate on the concept of meaningful human control. Finally, the author aims to offer a sounder opinion as to how a satisfying degree of control can, if ever, be established.
It is the fundamental stance of this paper that overly optimistic assumptions as to the reliability of autonomous weapons and the feasibility of meaningful human control over them may prove insubstantial in the face of many factors, including automation bias. This bias carries within it the risk of turning human operators into complete automatons. A supervising human operator who merely accepts automated solutions without ever questioning them would in fact mean that no real human control over the LAWS remains[42]. Such an insufficient level of oversight and supervision amounts to a very shaky ground for the protection of humanitarian values and opens the gates to impunity for transgressions such as war crimes. Cognizant of this alarming fact, the author believes that one safe way to secure humanity against autonomous weapons may be to introduce a comprehensive restraint on, or an outright ban of, the development and deployment of such systems and weapons.
Chronological Development of the LAWS Problematique
A specter has been haunting diplomatic negotiations and academic discussions for a long time now: the tormenting problem of how humanity will cope with so-called lethal autonomous weapon systems (LAWS). This has been a hot topic since the beginning of the 2010s[7]. A significant number of experts in robotics and international law have contested the very idea of developing systems that can decide on their own to kill human beings, showing no remorse for, or even appreciation of, the consequences of their lethal solutions.
Another group of pundits defended the development of such systems on the grounds that they lack the deficiencies and weaknesses human beings are prone to. In this line of thinking, LAWS embody a 'supermensch on diesel' which will keep to its mandate as programmed and discharge its soldierly obligations without the interference of any human conditions or vulnerabilities such as exhaustion, rage or revenge[46].
There is now a large body of literature on the issue, especially probing the compatibility of such systems with international humanitarian law and, to a lesser extent, international human rights law. This paper does not aim to add anything to the already existing body of works on these crucial points; rather, it deals mainly with automation bias and its effects in relation to autonomous weapon systems. It will endeavor to elaborate on the question of whether meaningful human control is ever attainable. This matter of control over LAWS has assumed a central position not only in diplomatic discussions but also in academic works. For the purposes of this paper, the author will use the LAWS definition of the US Department of Defense[20]. According to this commonly adopted definition[57], LAWS are weapon systems which, '...once activated, can select and engage targets without further intervention by a human operator'.
First Debates about LAWS
In 2013 the topic of LAWS first made it to the top of the international political agenda and has kept that status ever since[21]. Since 2014, experts have been meeting to discuss possible legal or political reactions on behalf of states and their broader ramifications[4]. First convened as a non-governmental meeting and then transformed into a formal meeting of governmental experts in 2017[41], these talks have been a great learning experience for the young and the uninitiated in terms of how international law-making among states proceeds. Diplomats, concerned individuals, learned academics and international NGOs have offered additional insights during these talks. These contributions not only facilitated a wider dissemination of those individuals' ideas but also played an influential role in the improvement
Alston P, ‘Interim Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions’ (United Nations Human Rights Council 2010) UN Doc A/65/32.
Amoroso D and Tamburrini G, ‘The Ethical and Legal Case Against Autonomy in Weapons Systems’ (2018) 18 Global Jurist <https://doi.org/10.1515/gj-2017-0012> accessed 21 September 2023.
——, ‘Toward a Normative Model of Meaningful Human Control over Weapons Systems’ (2021) 35 Ethics & International Affairs.
Anderson K, Reisner D and Waxman M, ‘Adapting the Law of Armed Conflict to Autonomous Weapon Systems’ (2014) 90 International Law Studies.
Arkin RC, Governing Lethal Behavior in Autonomous Robots (CRC Press 2009).
——, ‘The Case for Ethical Autonomy in Unmanned Systems’ (2010) 9 Journal of Military Ethics.
Asaro P, ‘On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making’ (2012) 94 International Review of the Red Cross.
Misselhorn C, ‘Autonome Waffensysteme/Kriegsroboter’ in Handbuch Maschinenethik (Springer VS, Springer Fachmedien Wiesbaden 2019) <https://www.researchgate.net/publication/336597731_Autonome_WaffensystemeKriegsroboter>.
Badell D and Schmitt L, ‘Contested Views? Tracing European Positions on Lethal Autonomous Weapon Systems’ (2022) 31 European Security.
Bainbridge L, ‘Ironies of Automation’ (1983) 19 Automatica.
Beck S, ‘Der Rechtliche Status Autonomer Maschinen’ (2017) 26 PJA/AJP.
Boni M, ‘The Ethical Dimension of Human–Artificial Intelligence Collaboration’ (2021) 20 European View.
Chengeta T, ‘Defining the Emerging Notion of “Meaningful Human Control” in Weapon Systems’ (2017) 49 New York University Journal of International Law and Politics.
Chetail V, ‘The Fundamental Principles of Humanitarian Law through the Case Law of the International Court of Justice’ (2002) 21 Refugee Survey Quarterly <https://repository.graduateinstitute.ch/record/5034/files/Refugee%20Survey%20Quarterly-2002-Chetail-199-211.pdf>.
Christie EH and others, ‘Regulating Lethal Autonomous Weapon Systems: Exploring the Challenges of Explainability and Traceability’ [2023] AI and Ethics <https://doi.org/10.1007/s43681-023-00261-0>.
Coco A, ‘Exploring the Impact of Automation Bias and Complacency on Individual Criminal Responsibility for War Crimes’ [2023] Journal of International Criminal Justice.
Crootof R, ‘A Meaningful Floor for “Meaningful Human Control”’ (2016) 30 Temple International and Comparative Law Journal.
Cummings M, ‘Automation Bias in Intelligent Time Critical Decision Support Systems’, AIAA 1st Intelligent Systems Technical Conference (American Institute of Aeronautics and Astronautics 2004) <https://doi.org/10.2514/6.2004-6313> accessed 22 September 2023.
Dahlmann A, Hoffberger-Pippan E and Wachs L, ‘Autonome Waffensysteme und menschliche Kontrolle - Konsens über das Konzept, Unklarheit über die Operationalisierung’.
‘Department of Defense Directive 3000.09’ (Department of Defense 2012).
Ekelhof M, ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation’ (2019) 10 Global Policy.
Ekelhof MAC, ‘Lifting the Fog of Targeting’ (2018) 71 Naval War College Review.
Garcia D, ‘Algorithms and Decision-Making in Military Artificial Intelligence’ [2023] Global Society.
George Jain A, ‘Autonomous Weapon Systems, Errors and Breaches of International Humanitarian Law’ [2023] Journal of International Criminal Justice.
Gómez de Ágreda Á, ‘Ethics of Autonomous Weapons Systems and Its Applicability to Any AI Systems’ (2020) 44 Artificial intelligence, economy and society.
Haner J and Garcia D, ‘The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development’ (2019) 10 Global Policy.
Heyns C, ‘Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions’ (United Nations Human Rights Council 2013) UN Doc A/HRC/23/47.
Hoffman RR and others, ‘Myths of Automation and Their Implications for Military Procurement’ (2018) 74 Bulletin of the Atomic Scientists.
ICRC, ‘ICRC Position on Autonomous Weapon Systems: ICRC Position and Background Paper’ (ICRC 2021).
Johnson J, ‘Finding AI Faces in the Moon and Armies in the Clouds: Anthropomorphising Artificial Intelligence in Military Human-Machine Interactions’ [2023] Global Society.
Kirlik A, ‘Modeling Strategic Behavior in Human-Automation Interaction: Why an “Aid” Can (and Should) Go Unused’ (1993) 35 Human Factors.
Klare MT, ‘The Challenges of Emerging Technologies’ [2018] Arms Control Today.
Krishnan A, ‘Enforced Transparency: A Solution to Autonomous Weapons as Potentially Uncontrollable Weapons Similar to Bioweapons’ in Jai Galliott, Duncan MacIntosh and Jens David Ohlin (eds), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare (Oxford University Press 2021) <https://doi.org/10.1093/oso/9780197546048.003.0015> accessed 18 July 2023.
Ma E, ‘Autonomous Weapons Systems Under International Law’ (2020) 95 New York University Law Review.
McFarland T, Autonomous Weapon Systems and the Law of Armed Conflict: Compatibility with International Humanitarian Law (Cambridge University Press 2020) <https://www.cambridge.org/core/books/autonomous-weapon-systems-and-the-law-of-armed-conflict/09BFF6BB5B88E34935678B5A0606A8A7>.
Methnani L and others, ‘Let Me Take Over: Variable Autonomy for Meaningful Human Control’ (2021) 4 Frontiers in Artificial Intelligence <https://www.frontiersin.org/articles/10.3389/frai.2021.737072>.
Meurant J, ‘Inter Arma Caritas: Evolution and Nature of International Humanitarian Law’ (1987) 24 Journal of Peace Research.
Mitchell C, ‘When Laws Govern LAWS: A Review of the 2018 Discussions of the Group of Governmental Experts on the Implementation and Regulation of Lethal Autonomous Weapons Systems’ (2020) 36 Santa Clara High Technology Law Journal.
Mosier KL and others, ‘Aircrews and Automation Bias: The Advantages of Teamwork?’ (2001) 11 The International Journal of Aviation Psychology.
Parasuraman R and Manzey DH, ‘Complacency and Bias in Human Use of Automation: An Attentional Integration’ (2010) 52 Human Factors.
Reeves SR, Alcala RTP and McCarthy A, ‘Challenges in Regulating Lethal Autonomous Weapons under International Law Fighting in the Law’s Gaps’ (2021) 27 Southwestern Journal of International Law.
Roff H and Moyes RM, ‘Meaningful Human Control, Artificial Intelligence and Autonomous Weapons’ (Article 36 2016) Briefing Paper <https://article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf>.
Rosendorf O, Smetana M and Vranka M, ‘Autonomous Weapons and Ethical Judgments: Experimental Evidence on Attitudes toward the Military Use of “Killer Robots”.’ (2022) 28 Peace and Conflict: Journal of Peace Psychology.
Rosert E and Sauer F, ‘Prohibiting Autonomous Weapons: Put Human Dignity First’ (2019) 10 Global Policy.
Santoni de Sio F and van den Hoven J, ‘Meaningful Human Control over Autonomous Systems: A Philosophical Account’ (2018) 5 Frontiers in Robotics and AI <https://www.frontiersin.org/articles/10.3389/frobt.2018.00015>.
Sati MC, ‘The Attributability of Combatant Status to Military AI Technologies under International Humanitarian Law’ [2023] Global Society.
Schaub H, ‘Der Einsatz Autonomer Waffensysteme Aus Psychologischer Perspektive’ (2020) 13 Zeitschrift für Außen- und Sicherheitspolitik.
Park S, ‘Analysis of the Positions Held by Countries on Legal Issues of Lethal Autonomous Weapons Systems and Proper Domestic Policy Direction of South Korea’ (2020) 32 The Korean Journal of Defense Analysis.
Seixas-Nunes A, The Legality and Accountability of Autonomous Weapon Systems: A Humanitarian Law Perspective (Cambridge University Press 2022) <https://www.cambridge.org/core/books/legality-and-accountability-of-autonomous-weapon-systems/FE880FD3F459B29A495D79D0C8347D79>.
Skitka LJ, Mosier KL and Burdick M, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies.
Solovyeva A and Hynek N, ‘Going Beyond the “Killer Robots” Debate: Six Dilemmas Autonomous Weapon Systems Raise’ (2018) 12 Central European Journal of International & Security Studies.
Sparrow R, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy.
Strauß S, ‘Deep Automation Bias: How to Tackle a Wicked Problem of AI?’ (2021) 5 Big Data and Cognitive Computing.
Szpak A, ‘Legality of Use and Challenges of New Technologies in Warfare – the Use of Autonomous Weapons in Contemporary or Future Wars’ (2020) 28 European Review.
Vagle JL, ‘Tightening the OODA Loop: Police Militarization, Race, and Algorithmic Surveillance’ (2016) 22 Michigan Journal of Race and Law.
Verdiesen I, Santoni de Sio F and Dignum V, ‘Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight’ (2021) 31 Minds and Machines.
Weigend T, ‘Convicting Autonomous Weapons?: Criminal Responsibility of and for AWS under International Law’ [2023] Journal of International Criminal Justice.
Wood NG, ‘Autonomous Weapon Systems and Responsibility Gaps: A Taxonomy’ (2023) 25 Ethics and Information Technology.
Woods DD, ‘The Risks of Autonomy: Doyle’s Catch’ (2016) 10 Journal of Cognitive Engineering and Decision Making.
Copyright (c) 2024 Gökhan Güneysu

This work is licensed under a Creative Commons Attribution 4.0 International License.