Following Ursula von der Leyen’s comments on the need for a “coordinated European approach to the human and ethical implications of AI”, the Commission has unveiled the EU white paper on AI, which outlines its key priorities and the steps it will take to support the development of AI. The white paper sets out the EU’s plan to become a leader in the ethical development and application of AI, which, it argues, can be turned into a competitive advantage over China and the US.

 

Broadly, the white paper emphasizes the importance of a coordinated and uniform approach to AI across the EU, in order to avoid fragmentation of the single market. Most importantly, it takes a risk-based approach to regulating AI, with the aim of establishing an “ecosystem of trust” for AI. It suggests focusing on “high-risk” AI applications, which it defines as applications used in a high-risk sector and in a manner likely to give rise to significant risks. Sectors it describes as likely to be high-risk include healthcare, transport, energy, and parts of the public sector such as asylum, migration, border controls, the judiciary, social security, and employment services. AI applications also qualify as high-risk if they affect the rights of individuals or companies and are likely to generate significant risks (treating patients, for example). AI applications deemed high-risk would be subject to more stringent governance and regulatory measures, including compulsory assessments before entering the market. The underlying algorithms and the data used to develop the technology would also be subject to liability and certification checks. The white paper further proposes that non-high-risk AI applications, in addition to complying with applicable legislation, could join a voluntary labelling scheme in order to “signal that their AI-enabled products and services are trustworthy”.

 

When it comes to strong consumer protection, the EU seems to be on the right track. It already has a substantial body of relevant law, such as the GDPR. It is encouraging to see the EU looking to update its legal standards and make them more robust, and it has the chance to set a new regulatory gold standard. However, the white paper can be criticised on several points.

 

Firstly, the tech industry has argued that the approach focuses too heavily on the risks of AI, which will stifle innovation, particularly for startups. Because “high-risk” is defined so broadly and vaguely, only large companies will be able to afford the costs of compliance, and investment will be delayed for services that are already restricted under the EU’s data privacy laws. In addition, the conformity assessments used to determine whether AI products may be placed on the EU market will add further delays and costs for companies. Such heavy-handed rules will make it much harder for EU businesses to use AI systems in many areas of the economy, and might push some companies to launch their products in friendlier markets. This would damage the EU’s global competitiveness and undermine its plan to become a “global leader in innovation in the data economy and its applications”.

 

Secondly, the white paper argues that infrastructures should support the creation of European data pools in order to ensure trustworthy AI. However, the idea that European AI algorithms should be trained on EU data raises several issues. It would significantly limit the data that companies in the EU could use, and would force some companies to retrain their AI systems to operate within the EU, again increasing costs and harming competitiveness. In addition, EU data is not globally representative, and relying on it alone would affect fairness and diversity.

 

Thirdly, some have argued that previous versions of the document were more daring in their recommendations. Earlier drafts contained many proposals, including a temporary ban on the use of facial recognition (though there are convincing arguments against such a ban) and special rules for the public sector. It seems, however, that concessions have been made in these divisive debates. It is also unclear how many of the crucial measures remaining in the document will actually be implemented.

 

Finally, while it is important to focus on the risks of AI applications, drawing a sharp line between high-risk and low-risk can be overly simplistic. This approach assumes that risk can be precisely calculated, and as a result, many AI applications that might have societal consequences are excluded from the regulatory proposal. In addition, some AI applications might be low-risk for some people but high-risk for others, and there may be high-risk applications in low-risk sectors and vice versa. The US Chief Technology Officer, Michael Kratsios, also criticized this blunt distinction, stating that it is better to treat risk as a spectrum when regulating AI technologies. He argues that in this respect, the US’s approach is more effective.

 

The proposals are open for public consultation until the 19th of May 2020.

 

Written by Zoe Caramitsou-Tzira.
