European Union External Action

Speech by High Representative/Vice-President Federica Mogherini at the plenary session of the European Parliament on Autonomous Weapons Systems

Strasbourg, 11 September 2018, 19:45
HR/VP speeches

Check against delivery!

 

Thank you Mr President.

 

Thank you for putting artificial intelligence on the agenda. I know that this might look like a debate about some distant future, or about science-fiction. It is not. Artificial intelligence is already part of our daily life – when we use our smartphone or when we watch a TV series, we understand that very well. And it is now starting to be weaponised, and to impact on our collective security. So I think it makes a lot of sense to have this debate here today.

 

We are entering a world where drones could fire – and could kill – with no need for a human to pull the trigger. Artificial intelligence could take decisions on life and death, with no direct control from a human being. The reason why we are here today is not that we are afraid of technology – let me start with that. Human ingenuity and technological progress have made our lives easier and more comfortable. The point is that scientists and researchers should and must be free to do their job knowing that their discoveries will not be used to harm innocent people.

 

After World War Two, a large number of nuclear scientists started to fight against nuclear weapons, so that research could focus on the peaceful applications of nuclear energy. Today we witness something very similar. Scientists and artificial intelligence pioneers are warning us of the dangers ahead. Some of them are refusing to work for the military. I believe the best way ahead is to agree on some common principles regarding the military use of artificial intelligence. Define the boundaries of its applications, so that within those limits, scientists are free to explore the immense positive potential of artificial intelligence.

 

This is a core objective in the European Commission’s Communication on Artificial Intelligence and its follow-up work that will also cover security matters. At the beginning of this month, the United Nations’ Group of Governmental Experts on Lethal Autonomous Weapons Systems agreed on a first set of "Possible Guiding Principles". This is a first step – after a number of failures – towards a common approach.

 

It is a good starting point, and the new Guiding Principles are very much in line with the positions we have developed inside the European Union, under the External Action Service’s coordination. Let me say that this is one of the points on the agenda where I could easily sit on both sides of the hemicycle, because there is work being done on the Commission side and work being done on the Council side under EEAS leadership.

 

The group of experts stresses that International Humanitarian Law applies to all weapons systems, both old and new, and that all weapons must always remain under human control. The experts have agreed that the UN Convention on Certain Conventional Weapons is the appropriate framework to regulate these weapons, and that any policy measure must not interfere with the civilian uses of artificial intelligence. This is only the first stage of the discussion, and there is no agreement yet on any regulation. Work will continue within the Group of Governmental Experts during the course of next year.

 

I believe we Europeans have an important contribution to bring to this table: our Member States hold different views – it is true – but we all agree that the use of force must always abide by international law, including International Humanitarian Law and Human Rights Law, and this fully applies to Autonomous Weapons Systems. States – and human beings – remain responsible and accountable for their behaviour in an armed conflict, even if it involves the use of autonomous weapons. This is why our position at the UN has been that humans should always make the decisions on the use of lethal force, and always exert sufficient control over lethal weapons systems.

 

Of course, we do not have all the answers and all the solutions. Also for this reason, I decided a few months ago to set up a panel with tech leaders from different backgrounds and expertise. We had a first meeting in Brussels in June and, together with all of them, we started a conversation between the tech world and the foreign and security policy community on how we can harness the opportunities of the digital era while also addressing the rising threats – and my intention as High Representative is to bring this issue also to the table of the Defence Ministers at one of our next Council meetings. Among the members of this “Global Tech Panel” are some of the experts on artificial intelligence who have been most vocal on the issue of Lethal Autonomous Weapons.

 

Together with the experts’ community, we can find a solution that is both prudent and innovative. We can continue exploring the immense possibilities of artificial intelligence, and at the same time guarantee the full respect of human rights. This is a collective responsibility, and I am particularly glad that the European Parliament is leading the way and driving the conversation on these issues. So I am looking forward very much to listening to your views on this extremely important part of our common work.

 

Thank you.
