By Elena Sánchez Nicolás

BRUSSELS - From chatbots to self-driving cars, artificial intelligence (AI) is expected to see huge growth in the coming years - which the EU views as an opportunity to challenge US and Chinese dominance in the field.

In the past, the EU has put forward proposals to increase research on AI, make more data available, enhance business cooperation, and develop national strategic plans for the deployment of these technologies in member states.

However, the potential risks to fundamental rights that arise from using these technologies in certain areas, such as hiring decisions or law enforcement, have increased calls for a harmonised legal framework across the bloc.

Following the presentation of its White Paper last year, the European Commission unveiled on Wednesday (21 April) the first-ever legal framework to regulate the use of AI systems in Europe - aiming to increase trust in these technologies and accelerate their uptake.

"On artificial intelligence, trust is a must, not a nice to have," EU digital commissioner Margrethe Vestager said.

"With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," she added.

This risk-based regulatory approach distinguishes between uses of AI posing an unacceptable risk, a high risk, and a limited or minimal risk.

Under this proposal, a European Artificial Intelligence Board would be established to facilitate the implementation of this regulation.


Loopholes on mass surveillance


AI applications that manipulate human behaviour (for example, voice-assisted toys encouraging dangerous behaviour in minors) or systems that allow Chinese-style 'social scoring' would be banned.

Additionally, the use of remote biometric identification systems, such as facial recognition, for law enforcement purposes in public spaces would, in principle, also be prohibited.

However, as in the EU's data protection rules (GDPR), certain exceptions would apply - such as searching for victims of crime or missing children, identifying a perpetrator or suspect of a criminal offence, or preventing an imminent threat, such as a terrorist attack.

Nevertheless, such uses would be subject to authorisation by a judicial or other independent body, and limited in time and geographic reach.

While the use of these systems by law enforcement would require developers to follow stricter rules, digital rights activists warned there are still "loopholes" that allow mass surveillance.

"Biometric mass surveillance is not a dystopian fantasy... as long as governments and companies across Europe use these unlawful tools, living under Big Brother is our reality," said Matthias Marx, a member of the network European Digital Rights, which has long advocated a total ban in public spaces.

In March, a survey revealed that most Europeans (55 percent) oppose the use of facial recognition in public spaces.


Risk evaluations


The regulation also covers AI applications in areas considered "high risk" because they could undermine people's safety or fundamental rights, such as education (for example, scoring of exams), employment (for example, CV-sorting software for recruitment) or public services (for example, credit-scoring denying citizens the opportunity to obtain a loan).

AI developers in these fields will have to conduct a risk assessment and ensure high-quality datasets, a high level of explainability, and human oversight, among other requirements, before their systems can enter the market.

"After reading this regulation, it is still an open question whether future start-up founders in 'high risk' areas will decide to launch their business in Europe," warned the trade association Digital Europe in a statement.

Furthermore, developers of AI-powered technologies posing a "limited risk" - such as chatbots or so-called deepfakes - would be obliged to make clear to users that they are interacting with a machine, unless this is obvious.

AI-enabled applications representing only minimal or no risk for citizens' rights or safety - such as video games or spam filters - are not regulated.

Companies that fail to comply with the legislation could face fines of up to €30m or six percent of their global turnover.


4.6 million jobs in a decade?


Meanwhile, Brussels has also updated a coordinated plan with member states to accelerate investment in AI.

The recovery packages, of which 20 percent of the funding (a total of €134bn) is earmarked for digitalisation, provide an "unprecedented opportunity" to invest in AI and lead globally in the development of these technologies, according to the commission.

Additionally, the EU aims to mobilise more than €20bn per year in combined public and private investment in AI over the next decade.

It is estimated that a common EU framework on AI could generate €294.9bn in additional GDP and 4.6 million extra jobs by 2030.

 
