1. ECOSYSTEM OF EXCELLENCE AND TRUST
AI is a collection of technologies that combine data, algorithms and computing power. By combining its technological and industrial strengths with a high-quality digital infrastructure and a regulatory framework based on its fundamental values, Europe can become a global leader in innovation in the data economy and develop an AI ecosystem that brings the benefits of the technology to European society and the economy as a whole.
It is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.
2. CAPITALISING ON STRENGTHS IN INDUSTRIAL AND PROFESSIONAL MARKETS
Europe has developed a strong computing infrastructure essential to the functioning of AI. Europe also holds large volumes of public and industrial data, the potential of which is currently under-used. It has well-recognised industrial strengths in safe and secure digital systems with low power consumption that are essential for the further development of AI. Harnessing the capacity of the EU to invest in next-generation technologies and infrastructures will increase Europe’s technological sovereignty in the data economy.
3. SEIZING THE OPPORTUNITIES AHEAD: THE NEXT DATA WAVE
Each new wave of data brings opportunities for Europe to position itself in the data-agile economy and to become a world leader in this area. Europe will continue to lead progress in the algorithmic foundations of AI, building on its own scientific excellence. Combining symbolic reasoning with deep neural networks may help us improve explainability of AI outcomes.
4. AN ECOSYSTEM OF EXCELLENCE
To build an ecosystem of excellence that can support the development and uptake of AI across the EU economy and public administration, there is a need to step up action at multiple levels.
5. AN ECOSYSTEM OF TRUST: REGULATORY FRAMEWORK FOR AI
The Commission established a High-Level Expert Group, which published Guidelines on trustworthy AI in April 2019. The Commission also published a Communication welcoming the seven key requirements identified in those Guidelines: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
A. AI Problem Definition
B. Possible Adjustments to Existing EU Legislative Framework Relating to AI
The Commission is of the opinion that the legislative framework could be improved to address the following risks and situations:
C. Scope of a Future EU Regulatory Framework
The EU has a strict legal framework in place to ensure, inter alia, consumer protection, to address unfair commercial practices and to protect personal data and privacy. In addition, the acquis contains specific rules for certain sectors (e.g. healthcare, transport). These existing provisions of EU law will continue to apply in relation to AI, although certain updates to that framework may be necessary to reflect the digital transformation and the use of AI.
As a matter of principle, the new regulatory framework for AI should be effective in achieving its objectives without being so prescriptive that it creates a disproportionate burden, especially for SMEs. To strike this balance, the Commission takes the view that the framework should follow a risk-based approach.
D. Types of Requirements
First, there is the question of how obligations are to be distributed among the economic operators involved. Second, there is the question of the geographic scope of the legislative intervention. In the Commission’s view, each obligation in a future regulatory framework should be addressed to the actor(s) best placed to address any potential risks.
F. Compliance and Enforcement
In view of the high risk that certain AI applications pose to citizens and society, the Commission considers at this stage that an objective, prior conformity assessment would be necessary to verify and ensure compliance with certain of the above-mentioned mandatory requirements applicable to high-risk applications. Conformity assessments for high-risk AI applications should form part of the conformity assessment mechanisms that already exist for a large number of products placed on the EU’s internal market.
G. Voluntary Labelling for Non-High-Risk AI Applications
For AI applications that do not qualify as ‘high-risk’ and that are therefore not subject to the mandatory requirements, one option would be, in addition to applicable legislation, to establish a voluntary labelling scheme.
A European governance structure on AI, in the form of a framework for cooperation among national competent authorities, is necessary to avoid fragmentation of responsibilities, to increase capacity in Member States, and to ensure that Europe progressively equips itself with the capacity needed for testing and certification of AI-enabled products and services. In this context, it would be beneficial to support national competent authorities so that they can fulfil their mandate where AI is used.
For Europe to seize fully the opportunities that AI offers, it must develop and reinforce the necessary industrial and technological capacities. As set out in the accompanying European strategy for data, this also requires measures that will enable the EU to become a global hub for data.
The European approach for AI aims to promote Europe’s innovation capacity in the area of AI while supporting the development and uptake of ethical and trustworthy AI across the EU economy. AI should work for people and be a force for good in society.
The White Paper on Artificial Intelligence is open for public consultation until 19 May 2020, and interested parties may submit comments online.