Autonomous Weapons Systems: Using Causal Layered Analysis to Unpack AWS

Article

Nicholas Marsh1*
1MSc in International Relations, Finland.

Abstract

Autonomous weapons systems (AWSs) are technologically advanced armed machines which, to varying degrees, are capable of operating independently of human control. AWSs typically take the form of flying drones, ships, or ground vehicles with integrated weapons, and machines with varying levels of autonomy, including seagoing ships and airborne drones, already exist. Inter-polity competition and fast-paced commercial developments in robotics and artificial intelligence are among the many incentives driving military stakeholders to advocate for AWSs. They see it as imperative not to be left behind in robotics and artificial intelligence development and regard AWS technology as an advantage in future conflicts. A contrasting perspective is driven by human rights campaigners and international organizations, who voice their concerns over AWS technology: war and conflict are seen as too chaotic and ambiguous for the deployment of AWS technology, which they see as potentially lowering barriers to conflict itself. Using causal layered analysis, this paper aims to unpack the phenomenon of AWSs and the two contrasting world views surrounding the topic. Military stakeholders see artificial intelligence and AWS technology as an opportunity to gain the high ground over adversaries. However, that high ground is likely to be unsustainable given the ecological nature of technological innovation, which is better characterised as a two-way street (one side develops the aircraft, the other develops anti-aircraft weapons). The opposite side of the ‘killer robot’ debate sees dangers in developing self-thinking, self-acting machines capable of using lethal force against human beings; the technology is seen as fundamentally incapable of, and too immature for, deployment in war in ways that adhere to the laws of war and ethics. If the future holds a transition of AWSs from ‘tools’ to ‘agents,’ this may briefly satisfy military stakeholders’ idea of a high ground. However, as the opposing view implies, an AWS may also prove to be an unreliable friend.

Keywords

Autonomous Weapons Systems, AWS, Causal Layered Analysis, Futures, Conflict

Introduction

The autonomous weapons system (AWS), and its primary driver artificial intelligence (AI), has generated substantial interest across several fields, including defence, the commercial sector, human rights, and law. The complexity of using artificial intelligence to delegate decision-making to machines on the battlefield has ignited a multi-stakeholder debate on the use of so-called ‘killer robots’ in conflict settings. While developments have been made in this field, fully independent AWSs have yet to reach the maturity required for deployment once the ambiguity that characterises military operations is taken into account.

Situational ambiguity leads to ethical and legal considerations, both important aspects of armed conflict. The central consideration in this debate is the willingness to delegate the act of violence to a programmable machine which lacks the capacity for subjective decision-making. As technology evolves, new applications are projected to allow a deeper level of independence from human control. Increasing this independence leads to a paradigm shift from a machine being a ‘tool’ to it being an ‘agent’. Whether discussing unmanned cars, airplanes, or ships, each capable of independently navigating different kinds of environments, a significant question arises when the technology is applied to warfare and lethal force: can autonomous machines be trusted to execute a mission involving lethal force without violating rules of engagement, laws of war, or ethical boundaries?

Looking to the future, causal layered analysis (CLA) offers a purposeful way of unpacking this issue. While the focus of CLA is to drill into the myths and world views relating to a particular topic, going beyond the visible data and its historical connections, CLA also offers a broader view of the topic by tapping into the perspectives of other stakeholders (Inayatullah, 2017). What follows is a snapshot of the issue of AWSs in a military context, utilising CLA to unpack the phenomenon.

The Litany

Autonomous weapon systems (AWSs), also called ‘lethal autonomous weapon systems’ (LAWSs), use artificial intelligence to function independently of human control. Using algorithms and other techniques to harness substantial computing power for task- and problem-solving, AWS technology could potentially reach the capacity to perform missions without any meaningful human control.

AWSs, driven by artificial intelligence, typically use cameras, sensors, and microphones to perceive their surroundings in order to create a world model which frames decision-making processes (Cummings, 2017). The European Defence Agency defines the broad, encompassing phenomenon of artificial intelligence as the:

“(…) capability provided by algorithms of selecting, optimal or sub-optimal choices from a wide possibility space, in order to achieve specific goals by applying different strategies including adaptivity to the surrounding dynamical conditions and learning from own experience, externally supplied or self-generated data” (European Defence Agency, 2020, para. 22).

AWS technology aspires to adopt elements of human intelligence, including reasoning, perception, and cognition. However, AWS technology is far from an imitation of human intelligence. The complexity affecting decision-making varies widely with environment and situation: a flying autonomous drone has fewer factors to consider in navigating the skies than a driverless car has in an urban environment. A flying machine must acknowledge obstacles at altitude and no-fly zones to follow preferred routes, whereas a driverless car needs to consider nearby vehicles, pedestrians, and traffic signs, among other variables, to navigate its environment safely (Cummings, 2017). Expanding this example to a conflict setting involving serious ambiguity (for example, a counter-insurgency operation where distinguishing civilian from foe is difficult), little is said about what kinds of emotional intelligence, policy, or ethical considerations shape decisions on using lethal force. Flying an aircraft is an example of what a Chatham House report calls skills-based decision-making. As decision-making becomes increasingly complicated, rules-based (the checklist), knowledge-based, and expertise-based behaviours become essential, the latter two being considered vital for situations involving a high level of uncertainty (Cummings, 2017).
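
The gap between these behaviour types can be made concrete in code. Below is a minimal, purely illustrative sketch (the names and rules are this paper’s own assumptions, not any real system’s logic) of a rules-based engagement checklist. Its brittleness is the point: any contact the explicit rules do not anticipate falls through to a hard-coded default, which is precisely where knowledge- and expertise-based judgement would be required.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected entity, as classified by the machine's sensors."""
    carries_weapon: bool       # sensor classification; may be wrong
    in_protected_zone: bool    # e.g. a hospital or other no-engage site
    identified_hostile: bool   # positive identification of a combatant

def rules_based_engagement(contact: Contact) -> str:
    """A checklist in the spirit of 'rules-based' behaviour:
    every rule is explicit and fixed at design time."""
    if contact.in_protected_zone:
        return "hold fire"
    if contact.identified_hostile and contact.carries_weapon:
        return "request engagement authorisation"
    # An armed but ambiguous contact lands here. No rule captures
    # its context, so the machine can only apply a default; whether
    # observing, withdrawing, or escalating to a human is right
    # depends on knowledge the checklist does not hold.
    return "hold fire"
```

The sketch also hides the harder problem: each boolean above is itself the output of uncertain sensor perception, so even a ‘correct’ checklist runs on inputs that may be wrong.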

Militaries across rival polities are researching new technologies to keep ahead of their adversaries. For some, AWS technology could provide a cheaper alternative, in financial and political terms, for executing missions and smaller operations. The United States Navy has been developing a ship capable of navigating the sea for months without a human crew, dubbed ACTUV (‘Anti-Submarine Warfare Continuous Trail Unmanned Vehicle’), potentially signalling a transition from a smaller number of large, expensive crewed ships to a larger number of small manned and unmanned ships (Klare, 2019). Being comparatively cheap, able to operate 24 hours per day, and reducing battlefield casualties, AWSs offer considerable potential advantages over crewed vehicles. Similar developments are underway in the air force, with unmanned aircraft, and in the army, which is developing autonomous equipment transports and robotic combat vehicles (Klare, 2019). The United Kingdom Ministry of Defence described a BAE Systems fully autonomous aerial drone (‘Taranis’) as capable of gathering intelligence, dropping bombs, and protecting itself against manned and unmanned aircraft (Cartwright, 2010). Russia has been working on integrating technology to create a fully autonomous battle tank (‘T-14 Armata’) (McDonald, 2019). In 2020, China released a video of swarming technology consisting of smaller flying units with high-explosive warheads designed to overwhelm an adversary’s defences (Hambling, 2020).

AWSs are framed in terms of different levels of autonomy, at some of which human operators still retain control over the machine’s actions. The U.S. Department of Defense separates AWSs into the following concepts: ‘human out of the loop’ (the machine is able to select and engage targets without further human intervention); ‘human on the loop’ (human-supervised, with the power to halt a machine’s engagement); and ‘human in the loop’ (semi-autonomous systems which require a human to select targets and authorise engagement).

As of late 2020, U.S. policy divided AWSs into categories, each anchored to a human operator who makes the final decision over target selection and engagement (Congressional Research Service, November 2020). China’s swarm technology is another example of an AWS still requiring a human operator to designate targets.
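
The three ‘loop’ concepts can be read as three positions of a single human veto in the same control loop. The sketch below is illustrative only; the enum labels follow the terms quoted above, but the function and its parameters are hypothetical simplifications.

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()      # semi-autonomous: human selects targets
    HUMAN_ON_THE_LOOP = auto()      # supervised: human may halt engagement
    HUMAN_OUT_OF_THE_LOOP = auto()  # machine selects and engages alone

def may_engage(level: AutonomyLevel, machine_selected: bool,
               human_authorised: bool, human_halted: bool) -> bool:
    """Returns True if an engagement proceeds. Each level moves the
    human veto one step further away from the act of violence."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return human_authorised                       # human selects and approves
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return machine_selected and not human_halted  # human may only abort
    return machine_selected                           # no human intervention
```

Read this way, the policy debate is over which branch of such a function a polity permits, and under what oversight.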

Military stakeholders present AWSs in a positive light. AWSs provide numerous tactical advantages over human beings, including increased operational and problem-solving capacity and a lack of humane traits considered to be weaknesses, such as emotional responses to violence and lack of discipline. Machines are considered less likely to act violently out of anger or for revenge (Chan, 2019). The Guardian quotes Gordon Johnson of the U.S. Joint Forces Command on ‘killer robots’: “They don’t go hungry. They’re not afraid. They don’t forget their orders” (Cartwright, 2010, para. 12), a comment which highlights how some military officials view the advantages of machines gaining ‘agent-like’ properties in warfare.

The System

Technological innovation in war is better depicted as relative change than as linear development. Arguments relying on a single demographic, identity-based, or technological narrative often fail to acknowledge the ecological character of change (Grove, 2019, p. 80). Some anthropologists, following in the footsteps of early strategists, have analysed war as a function of material factors, social institutions, and culture (Heuser, 2010, pp. 20-21).

Technological innovation has historical links to inter-polity competition, for example the 19th- and 20th-century naval arms races between Britain, France, and Germany, and the Cold War nuclear arms race between the U.S. and the Soviet Union after the Second World War (Strachan, 2013, pp. 171-172). One polity could gain a temporary advantage over another by deploying new technology.

However, new technology such as the aircraft shapes the trajectory of other developments, such as the anti-aircraft guns which emerged to counter it (Britannica, 1998). Additionally, many counter-insurgency operations have shown that an adversary can be difficult to combat even when it lacks the same technological means to fight a war. Hypothetically, one could make a linear argument based on how advances in technology have made weapons more precise and more destructive. On the other hand, some battlefield developments are better seen as countering a specific adversarial advantage (for example, the pike against cavalry, or the improvised explosive device) (Grove, 2019, pp. 113-136).

Rapid commercial development of robotics and artificial intelligence has incentivized military stakeholders to stay involved in these developments. While the commercial sector is considered to be in a stronger position to advance them, militaries cannot afford to be left behind while their adversaries reap the benefits. Each of the three major powers, the U.S., Russia, and China, is competing for dominance in the field of artificial intelligence and robotics, including that of AWSs (Congressional Research Service, November 2020). Competition surrounds the development of cyber and robotics capabilities. The public summary of the U.S. National Defense Strategy highlighted that: “Inter-state strategic competition, not terrorism, is now the primary concern in U.S. national security” (The United States Department of Defense, 2018, p. 1). China’s president Xi Jinping alluded to inter-polity competition when he warned against a new ‘Cold War’ in a speech at the World Economic Forum in 2021 (News and Current Affairs, 2021).

International cooperation is a significant aspect of modern inter-polity relations and has shaped expectations on the use of force through mutually restrictive treaties. In contrast to inter-polity rivalry and arms competition, institutions have emerged as platforms capable of facilitating conflict settlement without escalation into violence. One major example, which followed two devastating world wars, is the European Union, often referred to first and foremost as a ‘peace project’ (Garnett, 2019). Other cooperative instruments regulate the use of weapons deemed unnecessarily cruel, such as the ‘Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects’ (CCW), which covers restrictions on, for example, land mines, incendiary weapons, and blinding laser weapons (Gill, 2019).

Another important element in the development of AWS technology concerns the social and political consequences of war. The idea that war is driven by political aims has existed for centuries. In the 19th century, Otto August Rühle von Lilienstern wrote on the politics of war:

“(…) The individual operations have military purposes; the war as a whole always has a final political purpose, which means that war is undertaken … in order to realise the political purpose upon which the State has decided.” (Rühle von Lilienstern, as quoted in Heuser, 2010, pp. 8-12).

Criticism commonly surrounds recent conflicts, including U.S. involvement in Vietnam (1960s-1970s) and in Iraq and Afghanistan (from the early 2000s), where questions over the political objectives of those conflicts were considered neglected, at substantial social and political cost (Hastings, 2021).

Historian Samuel Moyn proposes an alternative future trajectory leading to increasingly limited conflict, pointing to technology and warfare vis-à-vis ‘humane domination’ and extraterritorial policing as opposed to traditional forms of warfare and occupation: limited engagement and use of force applied to pacify an occupied society (Moyn, 2016). On the flip side, left unchecked, wars and conflicts tend to escalate. Several historians trace the origins of major conflicts, including the First and Second World Wars, to smaller events or wars which escalated (Strachan, 2013, pp. 99-104). Conflict escalation bears directly on this discussion: would a fully autonomous ACTUV ‘submarine hunter’ be able to refrain from releasing weapons when confronting a rival polity’s nuclear submarine, avoiding a provocation that could escalate into nuclear confrontation (Klare, 2019)?

Soldiers receive training and are constrained by policy: in practical terms, their decision-making is shaped by guides, laws, and the chain of command. Rules set by international treaties and domestic policies create a frame of accountability for violence during war, and legal frameworks exist to assign accountability to those responsible for such decisions (United Nations Office on Drugs and Crime, 2017). Can the same be said of ‘killer robots’ using lethal force in similar contexts? Current policy addresses the complexity of military operations involving risk to civilians through the term ‘proportionality,’ which the U.S. Department of Defense treats as weighing decisions on the use of force whose consequences directly affect civilians. Its law of war manual states: “Under the law of war, judgements of proportionality often involve difficult and subjective comparisons (…)” (The United States Department of Defense, 2016, p. 61), directly implying that weighing such decisions is driven by the subjectivity of the stakeholders involved. Yet subjective decision-making for AWS technology is discussed far less in the context of law or ethics than in the context of sensor perception, algorithms, and the world models formulated by artificial intelligence.
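
The manual’s point about subjectivity can be restated in engineering terms: automating proportionality would force a developer to assign numbers to incommensurable values at design time. The sketch below is purely illustrative; the weight and threshold are arbitrary by construction, and that arbitrariness is the argument.

```python
def proportionality_check(military_advantage: float,
                          expected_civilian_harm: float,
                          harm_weight: float = 1.0,
                          threshold: float = 0.0) -> bool:
    """A would-be automation of a proportionality judgement.
    The law of war offers no principled way to choose harm_weight
    or threshold: those numbers encode someone's subjective values,
    fixed in advance rather than weighed in the situation at hand."""
    return military_advantage - harm_weight * expected_civilian_harm > threshold
```

Whatever values are chosen, the ‘difficult and subjective comparisons’ the manual describes do not disappear; they are merely made once, by a programmer, for every future engagement.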

This leads to a point raised by campaigners: how reliable is a ‘killer robot’ in action? Little is known about whether AWSs can distinguish combatants from non-combatants and thus act within the confines of the laws of conflict. Issues surrounding accountability for one’s own actions, the lack of emotional intelligence or a ‘moral compass,’ proneness to miscalculation and accident, and the shifting of the burden of conflict further onto civilians drive the case for banning fully autonomous AWS technology and keeping decisions of target selection and engagement with humans, who can be held accountable for unlawful actions (Campaign to Stop Killer Robots, 2020).

The political cost of war casualties creates an incentive to replace aspects of military operations with machines. The relatively high financial cost of human-operated military operations further encourages the employment of AWSs, as does the increased capacity they offer for intelligence-gathering and for missions considered too high-risk for human operators.

World View

With inter-polity competition revolving around increasingly ambitious technological development, military stakeholders view AWS technology as an asset full of possibilities. The discourse suggests AWSs can be used as tools even while being given agent-like capabilities. Banning AWS technology at the national level could be counterproductive, as it could leave rival polities in a technologically stronger position.

For the stakeholders driving restriction and regulation of AWS technology, ambiguity is a key descriptor of conflict. War is seen as too ambiguous and chaotic for cold algorithmic decisions always to determine the right, or realistic, course of action. Ambiguous enemies, complicated laws of conflict, and ethical considerations are significant reasons why fully autonomous solutions are viewed as an unrealistic alternative to manned ones. AWSs could further lower barriers to violence and remove the element of ‘guilt’ from it (Pasquale, 2020).

“It’s not always appropriate to fire and kill, (…) There are so many examples in the Iraq war where insurgents have been in an alleyway, marines have arrived with guns raised but noticed the insurgents were actually carrying a coffin. So the marines lower their machine guns, take off their helmets and let the insurgents pass. Now, a robot couldn’t make that kind of decision. What features does it look for? Could the box be carrying weapons?” (Elliott, as cited in Cartwright, 2010, para. 18).

Deep Myth

The deep myth surrounding the development of AWS technology is the view that technology can deliver the upper hand: the belief that it is possible to gain advantage over adversaries through technological innovation. A metaphor for this is ‘the ultimate high ground’; as Jairus Grove writes, artificial intelligence allows for thinking which is faster than that of humans (Grove, 2019, pp. 214-217). The myth of a technological high ground is actively challenged by the reciprocal nature of change, where simple and crude means can deal effective blows against which technology offers little meaningful counter.

CLA Charts

CLA chart: Military stakeholders

Litany: Militaries are looking into the potential of fully autonomous machines capable of using deadly force (‘killer robots’). Artificial intelligence is the format for decision-making in AWS technology.

System: Technological innovation is historically relevant in the context of war. Commercial development leads the way in robotics and artificial intelligence. International treaties facilitate arms regulation between polities. However, there are political and financial incentives to adopt AWS technology over human-operated alternatives. War carries social and political consequences and is often discussed in connection with political aims. National policies affect the way AWSs operate and the kinds of weapons they carry.

World View: Competition and insecurity. Technological development is integral to gaining the high ground over adversaries. Commercial developments in robotics make AWS technology inevitable. AWS technology yields financial and political benefits compared to the current situation.

Myth/Metaphor: Technological superiority is the high ground. Artificial intelligence provides the ultimate strategic advantage over adversaries. AWSs increase capacity for high-risk missions and surveillance.

Table 1: CLA chart on military stakeholders.

CLA chart: Campaigners against ‘killer robots’

Litany: Programmable machines are not capable of the knowledge- or expertise-based reasoning required for high-uncertainty situations, which are inherent to war. They also lack emotional intelligence and a ‘moral compass.’ ‘Killer robots’ cannot tell combatants from non-combatants. AWS technology is unreliable for deployment on its own, being prone to miscalculations, accidents, and glitches which could lead to further problems down the road.

System: Strong institutions act as forums for de-escalation where conflict emerges and for restricting the use of certain types of weapons. International treaties exist to regulate the types of weapons used by polities, but they do not yet cover artificial intelligence and AWS technology. As environmental and situational contexts become increasingly complicated, the requirements for applied intelligence in problem-solving increase. The social and political consequences of conflict are further accelerated by the deployment of unmanned weapon systems. Wars tend to escalate; the threat posed by the sheer quantity of existing nuclear weapon stockpiles still lingers, and escalation to a full nuclear confrontation is a substantial concern.

World View: The world is seen as too chaotic and vulnerable for the deployment of self-thinking and self-acting weapon systems. Conflict may become easier to engage in, and the burden of conflict could be shifted further onto civilians.

Myth/Metaphor: Artificial intelligence is an unreliable friend: prone to mistakes and too immature to operate on its own without meaningful human control.

Table 2: CLA chart on campaigners against ‘killer robots’.

Conclusion

Looking across the levels of litany, system, world view, and deep myth, the current trajectory shows a paradigm shift for AWSs from ‘tool’ to ‘agent’. The degree of uncertainty involved in AWS development, and the inherent uncertainty of war, both highlight the importance of placing fully autonomous AWSs within the context of broader policy and strategy. A novel idea would be to make each AWS itself a futurist, capable of engaging in a thought process that evaluates different futures scenarios and the consequences of actions, and of mapping a preferred future which then guides its actions on the battlefield (within the bounds of national policies and international treaties).
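
As a sketch of what such an ‘AWS as futurist’ thought process might look like, the outline below generates simulated futures for each candidate action, disqualifies any action with an unlawful outcome, and prefers the action whose worst lawful future best trades mission value against escalation risk. Everything here (the names, the scoring, and above all the assumption that such simulation is tractable) is this paper’s own illustration, not a design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    """One simulated future following a candidate action."""
    lawful: bool            # consistent with laws of war and treaties?
    escalation_risk: float  # 0.0 (de-escalatory) to 1.0 (provocative)
    mission_value: float    # contribution to the political objective

def choose_action(actions: List[str],
                  simulate: Callable[[str], List[Scenario]]) -> str:
    """Evaluate futures per action; default to deferring to a human."""
    best_action, best_score = "hold and refer to human operator", float("-inf")
    for action in actions:
        futures = simulate(action)
        if not futures or not all(s.lawful for s in futures):
            continue  # any unlawful future disqualifies the action
        # Judge each action by its worst-case lawful future.
        score = min(s.mission_value - s.escalation_risk for s in futures)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

Notably, the default action embeds a policy choice of its own: when no simulated future is acceptable, the machine hands the decision back to a human.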

A central question for the near future will be one of formulating policy which can be embedded into AWS decision-making processes so that they can aspire to knowledge-based reasoning in uncertain situations. Without sufficient reliability among ‘killer robots,’ which are currently projected to gain even more independence from human control, there is little chance of maintaining a link between conflict and preferred political outcomes, whether the task is policing an occupied population, destroying a specific key target, or supporting human soldiers on the battlefield.

Additionally, because technological innovation is ecological rather than linear, it drives counter-developments which are likely to be novel and effective. Humans could well adapt to the ecology that follows developments in artificial intelligence, creating new technology, or using pre-existing technology, that makes AWSs redundant. Technological superiority is therefore unlikely to be the endgame or the ultimate strategic advantage. Unless developers consider the risks of deploying fully independent AWS technology holistically, the political drive to ban the machines is likely to gain momentum. Laws of war, accountability, and ethical considerations will all affect the shape or form (if any) AWS technology takes in the future.

References

Campaign to Stop Killer Robots. (2020). The Threat of Fully Autonomous Weapons. https://www.stopkillerrobots.org/learn/

Cartwright, J. (2010, November 21). Rise of the robots and the future of war. The Guardian. https://www.theguardian.com/technology/2010/nov/21/military-robots-autonomous-machines

Chan, M. (2019, April 8). The rise of the killer robots – and the two women fighting back. The Guardian. https://www.theguardian.com/world/2019/apr/08/the-rise-of-the-killer-robots-jody-williams-mary-warehan-artificial-intelligence-autonomous-weapons

Congressional Research Service. (2020, November). Artificial Intelligence and National Security. Congressional Research Service. https://fas.org/sgp/crs/natsec/R45178.pdf

Congressional Research Service. (2020, December). Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems. Congressional Research Service. https://fas.org/sgp/crs/natsec/IF11150.pdf

Cummings, M. L. (2017). Artificial Intelligence and the Future of Warfare. International Security Department and US and the Americas Programme, Chatham House. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

Encyclopedia Britannica. (1998). Antiaircraft gun. https://www.britannica.com/technology/antiaircraft-gun/additional-info#history

European Defence Agency. (2020, August). Artificial Intelligence: Joint quest for future defence applications. European Defence Agency. https://eda.europa.eu/news-and-events/news/2020/08/25/artificial-intelligence-joint-quest-for-future-defence-applications

Garnett, S. (2019, November 27). The European peace project. Eurozine Editorial, Eurozine. https://www.eurozine.com/the-european-peace-project/

Gill, S. (2019). The Role of the United Nations in Addressing Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. The Secretariat of the High-level Panel on Digital Cooperation, The United Nations. https://www.un.org/en/un-chronicle/role-united-nations-addressing-emerging-technologies-area-lethal-autonomous-weapons

Grove, J. (2019). Savage Ecology: War and Geopolitics at the End of the World. Duke University Press.

Hambling, D. (2020, October 14). China Releases Video Of New Barrage Swarm Drone Launcher. Forbes. https://www.forbes.com/sites/davidhambling/2020/10/14/china-releases-video-of-new-barrage-swarm-drone-launcher/?sh=42a3de912ad7

Hastings, M. (2021, January 31). American Universities Declare War on Military History. Bloomberg. https://www.bloomberg.com/opinion/articles/2021-01-31/max-hastings-u-s-universities-declare-war-on-military-history

Heuser, B. (2010). Evolution of Strategy: Thinking War from Antiquity to the Present. Cambridge University Press.

Inayatullah, S. (2017). Causal Layered Analysis. Futuribles International.

Klare, M. (2019, March). Autonomous Weapons Systems and the Laws of War. Arms Control Association. https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war

McDonald, H. (2019, September 15). Ex-Google worker fears ‘killer robots’ could cause mass atrocities. The Guardian. https://www.theguardian.com/technology/2019/sep/15/ex-google-worker-fears-killer-robots-cause-mass-atrocities

Moyn, S. (2016, October 22). The Lawfare Podcast: Samuel Moyn on “How Warfare Became Both More Humane and Harder to End”. Lawfare. https://www.lawfareblog.com/lawfare-podcast-samuel-moyn-how-warfare-became-both-more-humane-and-harder-end

News and Current Affairs. (2021, January 25). China’s President Xi Jinping warns against ‘new Cold War’ in Davos speech. SBS. https://www.sbs.com.au/ondemand/video/1848149571527/chinas-president-xi-jinping-warns-against-new-cold-war-in-davos-speech

Pasquale, F. (2020, October 15). Machines set loose to slaughter: the dangerous rise of military AI. The Guardian. https://www.theguardian.com/news/2020/oct/15/dangerous-rise-of-military-ai-drone-swarm-autonomous-weapons

Strachan, H. (2013). The Direction of War: Contemporary Strategy in Historical Perspective. Cambridge University Press.

United Nations Office on Drugs and Crime. (2017, October). ICCS Briefing Note: Unlawful killings in conflict situations. United Nations Office on Drugs and Crime. https://www.unodc.org/documents/data-and-analysis/statistics/crime/ICCS/Unlawful_killings_conflict_situations_ICCS.pdf

The United States Department of Defense. (2016, December). Department of Defense Law of War Manual. Office of General Counsel, Department of Defense. https://dod.defense.gov/Portals/1/Documents/pubs/DoD%20Law%20of%20War%20Manual%20-%20June%202015%20Updated%20Dec%202016.pdf?ver=2016-12-13-172036-190

The United States Department of Defense. (2018). Summary of the 2018 National Defense Strategy of The United States of America. Department of Defense. https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf