EU Statement – UN Global Digital Compact: Deep Dive on AI and other emerging technologies

25 May 2023, New York – Statement on behalf of the European Union and its Member States at the 77th Session of the United Nations General Assembly Global Digital Compact Deep Dive on Artificial Intelligence and other emerging technologies


Thank you for giving me the floor, Mr/Ms Co-facilitator.

I have the honour to deliver this statement on behalf of the EU and its Member States.

Powered by vast processing capacity and large quantities of data, AI has already outperformed humans in some tasks, and has emerged as a crucial area of strategic importance. Recognizing the fundamental need to make this technology work for the benefit of our citizens and societies, the EU is advocating for reliable, transparent, trustworthy and human-centric AI with robust governance to safeguard its benefits while protecting the public interest.


We recognize that AI and other emerging technologies are key drivers of economic progress. Given their likely impact on the global economy and our everyday lives, it is imperative that we work towards a human-centric and innovation-friendly approach to AI based on fundamental rights and fundamental values such as democracy and the rule of law.

For the EU, safety and fundamental rights form the bedrock of all considerations concerning the entire lifecycle of new and emerging technologies. Trust in AI systems is crucial for their acceptance. Safety, human oversight, transparency and risk management are among the core principles that should guide the regulation of technology applications.

Information asymmetries continue to exist between AI developers in the private sector and policymakers responsible for the development of AI-related policies. These asymmetries are being exacerbated by the application of large language models and generative AI. These tools, capable of generating credible yet misleading content, could escalate disinformation campaigns and aggravate trust issues, heightening the challenges of policy and regulation. To address these new challenges and risks, the EU aims to reinforce regulations, such as the Digital Services Act, and to foster international cooperation to establish shared norms around the use of generative AI, including the future of work, the governance of intellectual property and know-how, the sharing of liability among AI actors, and access to APIs.

From the beginning, the EU approach to AI has aimed to create two ecosystems: one of excellence and one of trust. This is essential in order to promote the development and deployment of AI while addressing the risks associated with certain uses of this technology. The EU Coordinated Plan on AI translated these objectives into a set of concrete actions to be implemented by the European Commission and the EU Member States.

The Artificial Intelligence Act, presented by the European Commission in April 2021, will make sure that Europeans can trust what AI has to offer. Proportionate and flexible rules address the risks posed by certain uses of AI systems, foster innovation, and create a level playing field and legal certainty. The new rules will apply directly and in the same way across all EU Member States, following a risk-based approach. Systems considered high-risk will have to comply with several requirements before being placed on the EU market or put into use. The AI Act is designed to be future-proof: it will include flexible mechanisms that allow the high-risk use cases to be adapted as the technology evolves.

While AI can bring solutions to many societal challenges, it also risks intensifying inequalities and discrimination. Algorithms and related machine learning risk repeating, contributing to, or amplifying unfair biases that programmers may not be aware of or that are the result of specific data selection. It is therefore important to ensure that development is carried out by sufficiently diverse communities.

The Commission proposal for the AI Act encourages codes of conduct for low-risk AI systems and introduces transparency obligations for certain AI systems. When using AI systems such as bots, users should be aware that they are interacting with a machine, so that they can make an informed decision to continue or to step back.

In addition, member states of the European Union and the Council of Europe are currently collaborating to develop a legally binding instrument that addresses artificial intelligence in a comprehensive manner. This instrument is based on the Council of Europe's established standards concerning human rights, democracy, and the rule of law. It focuses on universal principles that foster innovation and welcomes participation from non-member states. Furthermore, it takes into consideration other relevant international legal frameworks that already exist or are being developed.

The EU will continue promoting a human-centric and balanced approach to AI within the Single Market and globally, covering the whole lifecycle of digital technologies, including design, development, deployment, evaluation, and use.

It will also continue cooperation at the international level, bilaterally and within the framework of multilateral fora, to achieve better coordination, collaboration, and governance. We encourage and expect this approach to be reflected in the Global Digital Compact.