International security and the regulatory framework for Lethal Autonomous Weapons Systems have been a priority for the Global Tech Panel since its inception. With the EU strategy on Artificial Intelligence (AI) adopted in December 2018, the Panel members provide substantive expert input to help ensure that the development of AI that could be used in weapons systems fully complies with international law and respects human dignity. The Panel meeting of 28 August 2019, when the Global Tech Panel members also met informally with EU Defence Ministers for the first time, was largely dedicated to this theme, as were the meetings of 9 March 2019 in Seattle and 2 April 2019 in Helsinki.
How governments should manage the rise of AI, harnessing its opportunities while also addressing the threats of the digital era, is the subject of a major international debate. The EU position is clear, and can be summed up in four points:
EU High Representative Mogherini, the Chair of the Global Tech Panel, outlined this position on 11 September 2018 in an address to the European Parliament, which thereafter adopted a Resolution to this effect.
In April 2018, the European Commission issued a Communication on Artificial Intelligence, initiating the elaboration of an EU Strategy on AI. On 7 December 2018, the Commission presented a Coordinated Plan on Artificial Intelligence, prepared with Member States, to foster the development and use of AI in Europe.
The Communication includes a section on the "Security-related aspects of AI applications and infrastructure, and international security agenda" which highlights the following:
The application of AI in weapons systems has the potential to fundamentally change armed conflicts and therefore raises serious concerns and questions. The Union will continue to stress that international law, including International Humanitarian Law and Human Rights Law, applies fully to all weapons systems, including autonomous weapons systems, and that States remain responsible and accountable for their development and use in armed conflict. The EU's position further remains that human control must be retained in decisions on the use of lethal force and built into the full life-cycle of any weapons system.
The document points out that "The High Representative of the Union for Foreign Affairs and Security Policy will, with the support of the Commission, build on consultations in the United Nations, the Global Tech Panel, and other multilateral fora, and coordinate proposals for addressing these complex security challenges."
On 8 April 2019, a Communication on "Building Trust in Human-Centric Artificial Intelligence" launched a comprehensive piloting phase, involving stakeholders on the widest scale, to test the practical implementation of ethical guidance for AI development and use.
In parallel, the EU contributes to the work of the United Nations’ Group of Governmental Experts on Lethal Autonomous Weapons Systems, which has agreed on a first set of "Possible Guiding Principles".
Views from the Global Tech Panel members:
Future of Life Institute: Lethal Autonomous Weapons pledge