International Security and Lethal Autonomous Weapons

10.12.2018

Exploring how Artificial Intelligence can deliver for human advancement and international peace is a key pillar of the work of the Global Tech Panel. A top priority is to help develop and implement principles of ethical and responsible innovation to govern the development of technologies used in weapons systems, and to ensure accountability and compliance with international law.

International security and the regulatory framework for Lethal Autonomous Weapons Systems have been a priority for the Global Tech Panel since its inception. With the EU strategy on Artificial Intelligence (AI) adopted in December 2018, the Panel members provide substantive expert input to help ensure that the development of AI which can be used in weapons systems fully complies with international law and respects human dignity. The Panel meeting of 28 August 2019 - when the Global Tech Panel members also met informally with EU Defence Ministers for the first time - was largely dedicated to this theme, as were those of 9 March 2019 in Seattle and 2 April 2019 in Helsinki.

The strategy highlights that the High Representative of the Union for Foreign Affairs and Security Policy will, with the support of the Commission, build on consultations in the United Nations, the Global Tech Panel, and other multilateral fora, and coordinate proposals for addressing these complex security challenges. 

How governments should manage the rise of AI, harnessing its opportunities while addressing the threats of the digital era, is the subject of a major international debate. The EU position is clear and can be summed up in four points:

  • International law, including International Humanitarian Law and Human Rights Law, applies to all weapons systems;
  • Humans must make the decisions with regard to the use of lethal force, exert control over the lethal weapons systems they use, and remain accountable for decisions over life and death;
  • The UN Convention on Certain Conventional Weapons is the appropriate framework to discuss the regulation of these kinds of weapons; and
  • Given the dual use of emerging technologies, policy measures should not hamper civilian research, including artificial intelligence (AI).

EU High Representative Mogherini, the Chair of the Global Tech Panel, outlined this position on 11 September 2018 in an address to the European Parliament, which thereafter adopted a Resolution to this effect.

https://twitter.com/eu_eeas/status/1039566506944864256 

In April 2018, the European Commission issued a Communication on Artificial Intelligence, initiating the elaboration of an EU Strategy on AI. On 7 December 2018, the Commission presented a Coordinated Plan on Artificial Intelligence, prepared with Member States, to foster the development and use of AI in Europe.

The Communication includes a section on the "Security-related aspects of AI applications and infrastructure, and international security agenda" which highlights the following:

The application of AI in weapons systems has the potential to fundamentally change armed conflicts and therefore raises serious concerns and questions. The Union will continue to stress that international law, including International Humanitarian Law and Human Rights Law, applies fully to all weapons systems, including autonomous weapons systems, and that States remain responsible and accountable for their development and use in armed conflict. The EU's position further remains that human control must be retained in decisions on the use of lethal force and built into the full life-cycle of any weapons system.


On 8 April 2019, a Communication on Building Trust in Human-Centric AI launched a comprehensive piloting phase, involving the widest possible range of stakeholders, to test the practical implementation of ethical guidance for AI development and use.

In parallel, the EU contributes to the work of the United Nations’ Group of Governmental Experts on Lethal Autonomous Weapons Systems, which has agreed on a first set of "Possible Guiding Principles".

 

Views from the Global Tech Panel members:

https://twitter.com/mustafasuleymn/status/1037689074939781121

https://twitter.com/mustafasuleymn/status/1005517866031108100

https://twitter.com/sundarpichai/status/1004800469405876226

https://twitter.com/rsiilasmaa/status/1004960306630746112

 

Future of Life Institute: Lethal Autonomous Weapons pledge

