EU proposes a Regulation for Artificial Intelligence

9 May 2022

In April 2021, the European Commission (the “Commission”) published a Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (the “Proposed AIA”), with the purpose of creating a sustainable, secure, trustworthy and human-centric framework for Artificial Intelligence (AI) applications. The Proposed AIA will impose significant obligations on both EU and non-EU businesses. Below we outline some of the key criteria and considerations concerning the extraterritorial scope of the AIA and its possible impact on users and providers outside the EU.

Definition of AI systems

The Proposed AIA sets a broad definition of AI systems, covering software embodying machine-learning approaches, traditional statistical approaches and rules-based approaches. The Council Presidency revised the definition to ensure legal clarity and to better reflect what constitutes an AI system, so that classic software systems which are not considered AI systems do not fall within the scope of the AIA. Even with the amended definition, Annex I lists techniques that may leave businesses uncertain as to whether their systems are in scope. For example, some commentators have suggested that “Bayesian estimation” should be deleted from Annex I because “Bayesian techniques are not artificial intelligence but well proven mathematical formulas”. Nevertheless, falling within the definition of an AI system under the AIA does not in itself mean that a technology will be subject to all of the applicable obligations.

Key roles

Although the main emphasis of the Proposed AIA is on providers and users of AI systems, there are also provisions which impose obligations on importers, product manufacturers, distributors, and authorised representatives of providers. Under the Proposal, a “provider” is an organisation that develops an AI system itself, or has it developed by a third party, with a view to placing it on the market or putting it into service under its own name or trademark.

Moreover, the AIA will apply to a user, defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority”. The definition of user was revised, and the wording “except where the AI system is used in the course of a personal non-professional activity” was removed from the Proposed AIA. It is worth noting that the AIA will not apply to AI systems used exclusively for military purposes. Nor does the Proposed AIA cover AI systems, or their outputs, used for the sole purpose of research and development.

Representatives within the EU

Where an importer cannot be identified, the provider of a high-risk AI system that is not established within the EU must appoint an authorised representative established within the EU to perform the provider’s tasks. The authorised representative must keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national competent authorities, cooperate with those authorities on any action they take in relation to the high-risk AI system, and, upon a reasoned request, provide them with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system.

Extraterritorial application

The AIA states that its provisions will apply beyond the EU’s borders, and the main criterion is whether the impact of the AI system occurs or has effect within the EU. The natural or legal person responsible for making an AI system available on the EU market for the first time, or for supplying it for first use directly to a user or for its own use in the EU, will be subject to the obligations of the AIA as a provider. Furthermore, providers and users not established within the EU are subject to the obligations of the AIA where the output produced by the system is used within the EU. Although the Proposed AIA does not define “output”, the definition of an AI system refers to outputs in the form of content, predictions, recommendations or decisions influencing the environments the system interacts with.

Fines

The Proposed AIA sets administrative fines which depend on the severity of the breach. These fines can be substantial for AI companies around the world, reaching up to EUR 30 million or 6% of total worldwide annual turnover, whichever is higher, for the most serious infringements.

*The Proposed AIA is still under negotiation and discussion by the responsible European bodies.