Volume 51, Issue 3 (2020)

"Embodied AI" and the Direct Participation in Hostilities: A Legal Analysis

by Francis Grimal and Michael J. Pollard

This Article questions whether, under International Humanitarian Law (IHL), the concept of a “civilian” should be limited to humans. Prevailing debate within IHL scholarship has largely focused on the lawfulness (or not) of the recourse to autonomous weapons systems (AWS). However, the utilization of embodied artificial intelligence (EAI) in armed conflict has yet to feature with any degree of prominence within the literature. An EAI is an “intelligent” robot capable of independent decision-making and action, without any human supervision. Predominantly, the existing AWS/AI debate remains preoccupied with ascertaining whether the military “system” is capable of distinguishing between civilians and combatants. Furthermore, the built-in protection mechanisms within IHL are inherently “loaded” in favor of protecting humans from AWS, rather than vice versa.

IHL makes a clear distinction between civilians and civilian objects. However, increasingly advanced EAIs will make such a distinction highly problematic. The novel approach of this Article is twofold: to address the “EAI lacuna” in the broader sense, and to consider the application of EAI within a specific area of IHL: “direct participation in hostilities” (DPH). In short, can a robot “participate”? DPH is firmly grounded in the cardinal principle of distinction and in proportionality assessments, which together afford protection to the civilian population during hostilities. Fundamentally, this Article challenges the International Committee of the Red Cross’s (ICRC) influential guidance on DPH. The authors controversially submit that by continuing to follow that guidance, civilian objects will, under some circumstances, be afforded greater protection than human combatants.

To highlight this deficiency, the authors challenge the ICRC’s assertion that civilian status must be presumed where there is doubt, and instead subscribe to the prevailing alternative interpretation that DPH assessments must be made on a case-by-case basis. To address the deficiency, the authors propose the novel inclusion of a “Turing-like test” within the DPH assessment.

A concrete example of EAI is that of a robot medic. The robot medic’s Hippocratic duty is to protect its patient’s life. In doing so (and given a suitable set of circumstances), the robot medic may wish to return fire against an attacker; here, the authors envisage a scenario arising during urban warfare. Would such an action constitute DPH, and what would the legal parameters look like in practice? Consequently, how would the attacker calculate collateral damage when seeking to neutralize the potentially “DPHing” robot? Implicit within such a discussion is the removal of the emotional attachments that, for many, are innate in DPH assessments. Indeed, does the ICRC’s tripartite test for “DPHing” contain an understandable bias in favor of humanitarian considerations?

