Autonomous weapons must remain under human control, Mogherini says at European Parliament

14.09.2018

The use of force must always abide by international law, including international humanitarian law and human rights law, and this fully applies to autonomous weapons systems. States – and human beings – remain responsible and accountable for their behaviour in an armed conflict, even if it involves the use of autonomous weapons.

How governments should manage the rise of Artificial Intelligence (AI) to ensure we harness the opportunities while also addressing the threats of the digital era is one of the major strands of open debate the EU has initiated together with tech leaders.

The EU has a clear and strong position in the debate on lethal autonomous weapons systems such as drones, which can be summed up in four points:

  • International law, including international humanitarian law and human rights law, applies to all weapons systems;
  • Humans must make the decisions with regard to the use of lethal force, exert control over the lethal weapons systems they use, and remain accountable for decisions over life and death;
  • The UN Convention on Certain Conventional Weapons is the appropriate framework for discussing the regulation of these kinds of weapons; and
  • Given the dual-use nature of emerging technologies, policy measures should not hamper civilian research, including research into artificial intelligence.

This position was laid out clearly by EU High Representative Federica Mogherini on 11 September in an address to the European Parliament, and is reflected in the Resolution subsequently adopted by Parliament.

https://twitter.com/eu_eeas/status/1039566506944864256

"We are not afraid of technology. Human ingenuity and technological progress have made our lives easier and more comfortable, said Mogherini. "The point is that scientists and researchers should and must be free to do their job knowing that their discoveries will not be used to harm innocent people."


Emerging technologies, including AI, that are used in weapons systems must be developed and applied according to the principles of responsible innovation and ethical principles, such as accountability and compliance with international law. Doubts about their legitimacy or respect for human dignity should be addressed through clear regulation that defines the prohibited aspects of such technology.

The EU has begun to address this issue with a European Commission Communication on Artificial Intelligence.

The United Nations’ Group of Governmental Experts on Lethal Autonomous Weapons Systems has also agreed on a first set of "Possible Guiding Principles".

https://twitter.com/eu_eeas/status/1039819544964030465

Getting the balance right requires an open discussion between a variety of stakeholders, and is a topic addressed by the Global Tech Panel, first convened by EU High Representative Federica Mogherini in June and set to meet again on the sidelines of the UN General Assembly this month in New York.

"We [governments] do not have all the answers and all the solutions" and the EU has therefore "started a conversation between the tech world and the foreign and security policy community […] on how we can harness the opportunities of the digital era while also addressing the rising threats," Mogherini said. "Among the members of this 'Global Tech Panel”' are some of the experts on artificial intelligence who have been most vocal on the issue of lethal autonomous weapons."

"Together with the experts’ community, we can help find a solution that is both prudent and innovative. We can continue exploring the immense possibilities of artificial intelligence, and at same time guarantee the full respect of human rights and international law," said Mogherini.

Federica Mogherini intends to put this on the agenda of EU Defence Ministers and also to continue to draw on the expertise of the Global Tech Panel in this regard. 
