
European draft Regulation on artificial intelligence: Key questions answered

By Peter Katko

EY Global Digital Law Leader

Leader in digital law, providing legal approaches for designing digital transformation in a legally compliant way.

10 minute read 15 May 2021
Related topics Law AI

The main points to keep you up to date.

Executive summary
  • The European Commission has taken a step forward in its strategy to achieve a trustworthy AI environment in the EU, proposing legislation covering the supply and use of AI in the form of a regulation.
  • The Regulation will apply to AI used in or placed on the European Union (EU) market, irrespective of whether the providers are based within or outside the EU.
  • AI systems will be subject to different levels of obligations or prohibitions depending on the risks posed to the health, safety and fundamental rights of persons in the EU.
  • An antitrust/GDPR-style sanctioning regime is proposed, with fines of up to €30m or 6% of global annual turnover, whichever is higher.
  • Obligations and requirements are addressed not only to providers of AI systems but also to stakeholders that use those systems or that are part of the value chain (manufacturers, importers, distributors).
  • Data governance is taken to a new level as it will now need to be more comprehensive and subject to not only GDPR obligations but also to this new AI regulation, given the risk of higher sanctions.

On 21 April 2021, the European Commission published its proposed Regulation on Artificial Intelligence (draft Regulation), together with a Communication on “Fostering a European approach to Artificial Intelligence”.

The intention to regulate AI was made clear by the Commission president, Ursula von der Leyen, from the beginning of her mandate in 2019. Since then, different preparatory acts have followed, including a White Paper on the topic and the EU Parliament resolutions issued in October 2020 (on Ethics, Liability and Intellectual Property Rights). Those preparatory acts are all based on the 2018 Communication on AI and the High-Level Expert Group Guidelines on Trustworthy AI, which are currently applicable and constitute a framework for organizations participating in the AI ecosystem.

Together with the draft Regulation, the Commission has also proposed a new regulatory framework for machinery products, updating safety rules in order to build trust in new products and digital technologies. Among other objectives, the new draft Machinery Regulation, which will replace the Machinery Directive 2006/42/EC, aims to address the risks posed by emerging digital technologies (such as robotics and Internet of Things in addition to AI) and will be complementary to the AI Regulation. The Regulation will cover the safety risks posed by AI systems, while the Machinery Regulation will apply in relation to the safe integration of AI systems into overall machinery to avoid compromising the safety of the machinery product as a whole.


Question 1

What AI will fall within the scope of the draft Regulation?

A single definition of AI is proposed

“Software that is developed with one or more of the techniques and approaches listed in Annex I (broadly speaking, machine learning, logic and statistical approaches) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with …”

It is important to keep in mind that the draft Regulation applies to the way AI is used, not to the technology itself. The definition is intended to be future-proof and technology-neutral, and the aim is to be comprehensive.

There are some types of AI that are out of the scope of the draft Regulation. One is legacy AI, meaning AI placed on the market or put into service before the date of application. The Commission has indicated that it wishes to expedite its legislative process to bring this Regulation into force, perhaps as early as 2022. Organizations hoping to bring AI tools into operation quickly in order to exclude them from the Regulation’s scope should note that if a tool is subject to significant changes in its design or intended purpose after the application date, it will be subject to the Regulation in any case.

Other exempt technologies are AI systems which are components of the large-scale government IT systems established by EU law in the areas of freedom, security and justice (e.g., Schengen visa databases, criminal records or security systems) and which have been placed on the market or put into service before the date falling 12 months after the Regulation’s date of application (unless the replacement or amendment of those laws leads to a significant change in the design or intended purpose of the AI system or systems concerned). Finally, AI systems for military purposes and those used by public authorities in third countries or by international organizations are also out of scope.


Question 2

To which stakeholders does it apply?

The draft Regulation sets out obligations across stakeholders throughout the entire value chain

This includes not only providers[1] bringing AI tools to market or implementing AI systems in the EU but also manufacturers, distributors, importers and users of such AI systems.

Any stakeholder in the value chain will be considered a provider in any of the following circumstances:

(i)    if placing the AI tool on the market under its own name or trademark,

(ii)   if modifying the intended purpose of the AI system, or

(iii)  if making a substantial modification.

Where cases (ii) and (iii) apply, the initial provider will no longer be responsible.

Users are defined as any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity. As a result, all professional users of AI systems will be subject to the applicable obligations under the draft Regulation.

[1] A provider is any natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

Question 3

Where does it apply?

The Regulation is extraterritorial in scope

It will apply not only within the EU, but also to providers or users that are established or located outside the EU territory:

(i)    which place on the market or put into service AI systems in the EU, or

(ii)   where the output produced by the AI system is used in the EU.


Question 4

A risk-based approach to regulation: different obligations for different AI purposes

The Regulation classifies AI into four groups

These groups are:

  • Prohibited,
  • High risk,
  • Low risk, and
  • Minimal risk.

The severity of the regulatory approach depends on the classification of the AI in question. The proposed legislation sets out a regulatory structure that bans some uses of AI, heavily regulates high-risk uses and lightly regulates less risky AI systems.

Organizations should be aware that the Commission intends to prohibit certain uses of AI which are deemed to be unacceptable because they:

(i)   Deploy subliminal techniques or exploit vulnerabilities of specific groups of persons due to their age or disability, in order to materially distort a person’s behavior in a manner that causes physical or psychological harm;

(ii)  Lead to ‘social scoring’ by public authorities, or

(iii) Conduct ‘real time’ biometric identification in publicly available spaces (with some derogations).

At the other end of the scale, for low or minimal risk AI systems, the framework indicates that there will be few restrictions, if any at all. In between, high-risk AI is permitted subject to compliance with certain requirements, and certain non-high-risk AI (e.g., impersonation, bots) is permitted subject to information and transparency obligations. These might include making transparent to humans that they are interacting with an AI system and that emotion recognition or biometric categorization is applied, as well as labeling so-called ‘deep fakes’ (with some exceptions).

Due to this risk-based approach, most of the obligations and requirements outlined in the draft Regulation refer to high-risk AI. The classification of an AI system as high-risk is based on its intended purpose (in line with product safety legislation). It depends not only on the function performed but also on the specific purpose and modalities for which the system is used.

There are two broad groups of AI that are considered high-risk:

(i)  those intended to be used as a Safety Component[2] of a product, or that are themselves products, covered by the EU harmonization legislation listed in Annex II and required to undergo a third-party conformity assessment, and

(ii) stand-alone systems in eight areas:

a. Biometric identification and categorization of natural persons,

b. Management and operation of critical infrastructure,

c. Education and vocational training,

d. Employment, workers management and access to self-employment,

e.  Access to, and enjoyment of, essential private services and public services and benefits,

f.  Law enforcement,

g.  Migration, asylum and border control management, and

h.  Administration of justice and democratic processes.

The areas are listed in Annex III, which the Commission may update under the criteria set out in the Regulation, with some input from Member States.
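For illustration only, and without substituting for a legal analysis, the following minimal Python sketch shows how an organization might run a first-pass triage of an AI system against these tiers. All labels for the prohibited practices, Annex III areas and transparency-only uses are hypothetical shorthand, not terms defined in the draft:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LOW = "low"          # information and transparency obligations only
    MINIMAL = "minimal"  # few or no specific obligations expected

# Hypothetical shorthand labels; the authoritative lists are in the draft itself.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation",
    "social_scoring_by_public_authorities",
    "real_time_biometric_id_in_public",
}
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_and_democracy",
}
TRANSPARENCY_ONLY_USES = {"chatbot", "emotion_recognition", "deep_fake"}

def triage(intended_purpose: str, annex_ii_safety_component: bool = False) -> RiskTier:
    """First-pass classification driven by the system's intended purpose."""
    if intended_purpose in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if annex_ii_safety_component or intended_purpose in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if intended_purpose in TRANSPARENCY_ONLY_USES:
        return RiskTier.LOW
    return RiskTier.MINIMAL

print(triage("employment"))  # RiskTier.HIGH: e.g., a CV-screening tool (Annex III)
```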

High-risk AI is subject to the fulfilment of certain obligations. These are:

(i)       Risk management

(ii)      Data governance

(iii)     Technical documentation

(iv)     Record keeping (traceability)

(v)      Transparency and provision of information to users

(vi)     Human oversight

(vii)    Accuracy

(viii)   Cybersecurity robustness.

It is beyond the scope of this paper to go into detail on each of these requirements. However, we set out some ideas on three of these below:

Risk management systems need to be established, implemented, documented and maintained. Risk management is a continuous, iterative process running throughout the entire product lifecycle, requiring regular and systematic updating. The intended purpose of the high-risk AI system and the risk management system itself shall be considered when assessing compliance with the requirements set forth in the draft Regulation.

The requirement of data governance is key, not only because without data there is no AI, but also because the data used for training, validation and testing will define the legality (and the ethics) of the AI system. More importantly, perhaps, infringing the data governance requirement triggers the highest sanction set out in the draft Regulation (see comments in Q7 for further details on penalties). The draft Regulation sets out several characteristics of data governance, mandating that it shall include certain quality criteria (impacting design choices, collection, formulation of assumptions, prior assessment, and examination in view of possible bias and data gaps) and technical limitations. In addition, to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to high-risk AI systems, the providers of such systems may process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. Those safeguards include technical limitations on re-use and the use of state-of-the-art security and privacy-preserving measures, such as pseudonymization, or encryption where anonymization may significantly affect the purpose pursued (in line with GDPR compliance).
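Purely as an illustration of one such privacy-preserving measure, here is a minimal Python sketch of pseudonymization by keyed hashing (HMAC-SHA256) applied to a direct identifier in a hypothetical training record; the field names and key handling are illustrative assumptions, not requirements of the draft:

```python
import hashlib
import hmac

SECRET_KEY = b"store-and-rotate-in-a-key-vault"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is stable (the same person always maps to the same token,
    which preserves utility for bias monitoring across records) but is not
    reversible without the secret key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "age_band": "30-39", "outcome": "hired"}
record["name"] = pseudonymize(record["name"])
print(record)  # the direct identifier is now a stable, non-reversible token
```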

On the cybersecurity robustness criteria, it is important to highlight the presumption of compliance for high-risk systems that are certified, or for which a statement of conformity has been issued, under a cybersecurity scheme pursuant to the Cybersecurity Act. More generally, to ease the compliance burden, there are instruments to achieve a presumption of conformity with the requirements, including adherence to standards, common specifications, cybersecurity schemes, conformity assessments, certificates and EU declarations of conformity.

[2] Safety Component: physical or digital component, including software, of machinery which serves to fulfil a safety function and which is independently placed on the market, the failure or malfunction of which endangers the safety of persons but which is not necessary in order for the machinery to function or may be substituted by normal components in order for the machinery to function (Draft Regulation on Machinery Products).

Question 5

How should organizations comply when developing AI systems?

There are different milestones across the lifecycle of an AI system, with differing compliance input requirements at each milestone

Before placing an AI system on the market, for example, the provider must ensure that the intended purpose and risk management systems are considered. From the outset of product development, the provider must set up an adequate data governance model (covering the training, validation and testing of datasets), and draw up technical documentation and information/instructions for users. In addition, the provider must perform an ex ante conformity assessment (or equivalent), affix the CE marking of conformity and register the system in the EU database.

Once the AI is operative, it will be necessary to carry out post-market monitoring, establish an incident reporting/management system, and conduct new conformity assessments when any change occurs in the intention or function of the AI. Logs must be kept for as long as the purpose and national laws make it necessary, and documentation must be kept for 10 years after placing the AI on the market for traceability and accountability purposes.
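Purely as an illustration, the milestones above could be tracked internally as a simple checklist. The following minimal Python sketch uses hypothetical phase names and item wording summarizing the steps described in this section:

```python
# Hypothetical internal checklist of the lifecycle milestones described above.
LIFECYCLE_CHECKLIST = {
    "pre-market": [
        "define intended purpose",
        "set up risk management system",
        "establish data governance model (training/validation/testing datasets)",
        "draw up technical documentation and instructions for users",
        "perform ex ante conformity assessment",
        "affix CE marking and register in the EU database",
    ],
    "post-market": [
        "carry out post-market monitoring",
        "operate incident reporting/management system",
        "re-assess conformity on changes to intention or function",
        "retain logs as long as purpose and national law require",
        "retain documentation for 10 years after placing on the market",
    ],
}

def outstanding_items(phase: str, completed: set[str]) -> list[str]:
    """Return checklist items for a phase that are not yet marked complete."""
    return [item for item in LIFECYCLE_CHECKLIST[phase] if item not in completed]

print(outstanding_items("pre-market", {"define intended purpose"}))
```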


Question 6

What about AI systems which continue to “learn”?

There are specific references across the draft Regulation to systems that continue to learn after being placed on the market or put into service

Such systems shall be developed in such a way as to ensure that potentially biased outputs are duly addressed with appropriate mitigation measures.

The draft Regulation clarifies that, for self-learning AI, changes derived from the self-learning won’t be considered a substantial modification and, consequently, won’t trigger the need for a new conformity assessment, as long as changes to the high-risk AI system and its performance were pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation.
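To illustrate the idea of pre-determined changes, here is a minimal sketch assuming, hypothetically, that a provider declared an accuracy envelope in its technical documentation at the time of the initial conformity assessment; updates inside the envelope would then not count as substantial modifications:

```python
# Hypothetical envelope declared in the technical documentation at the time
# of the initial conformity assessment: self-learning updates are treated as
# pre-determined only while accuracy stays inside these bounds.
PREDECLARED_ACCURACY_RANGE = (0.90, 0.99)

def update_is_predetermined(new_accuracy: float) -> bool:
    """Return True if a self-learning update stays within the declared envelope."""
    low, high = PREDECLARED_ACCURACY_RANGE
    return low <= new_accuracy <= high

for accuracy in (0.93, 0.87):
    if update_is_predetermined(accuracy):
        print(f"accuracy={accuracy}: within pre-determined changes, no new assessment")
    else:
        print(f"accuracy={accuracy}: substantial modification, new conformity assessment")
```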


Question 7

What if organizations don’t comply?

The risks of non-compliance

Non-compliance with the obligations exposes an organization to financial penalties of up to a maximum of €30m, or up to 6% of total annual global turnover for the preceding financial year, whichever is higher. On the lower end of the scale, penalties may reach €10m, or up to 2% of total annual global turnover, with an intermediate tier of €20m or up to 4% of total annual global turnover.

The infringements which attract the maximum sanction are reserved for those that do not respect the category of prohibited AI systems/practices or do not comply with the data governance requirements. The second tier of sanctions will be imposed for non-compliance with any other requirements or obligations under the draft Regulation. Finally, the substantial but less severe penalties are likely to be imposed for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
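To make the “whichever is higher” mechanics concrete, here is a short Python sketch computing the cap for each tier against a hypothetical turnover figure (the turnover below is illustrative only):

```python
def fine_cap(fixed_eur: int, pct: float, global_turnover_eur: int) -> float:
    """Cap for a tier: the fixed amount or the percentage of total annual
    global turnover, whichever is higher."""
    return max(fixed_eur, pct * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical global annual turnover of €2b

print(f"Top tier:          €{fine_cap(30_000_000, 0.06, turnover):,.0f}")  # €120,000,000
print(f"Intermediate tier: €{fine_cap(20_000_000, 0.04, turnover):,.0f}")  # €80,000,000
print(f"Lower tier:        €{fine_cap(10_000_000, 0.02, turnover):,.0f}")  # €40,000,000
```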

For deciding the amount of the fine, all relevant circumstances shall be considered, and due regard shall be given to:

(i)    the nature, gravity and duration of the infringement and consequences,

(ii)   whether administrative fines have already been applied by other authorities to the same operator for the same infringement, and

(iii)  the size and market share of the operator committing the infringement.


Question 8

Enforcement, new authorities and EU database for stand-alone high-risk AI systems

New specialized authorities will be appointed and established to enforce the draft Regulation

At the national level, a national supervisory authority, the notifying authority and the market surveillance authorities will be designated. National authorities will be competent for coordinating with and representing the Member State at the European Artificial Intelligence Board, for designating and monitoring notified bodies (conformity assessment bodies), for monitoring the compliance of operators (with prior verification from notified bodies where third-party conformity assessment is mandatory), and for sanctioning in cases of non-compliance. Where third-party conformity assessments are required, they shall be performed not by the authorities themselves but by notified bodies, designated by a notifying authority in accordance with the draft Regulation and other relevant EU harmonization legislation.

At the EU level, the Commission plays a key role, amending the draft Regulation by way of delegated acts and setting standards by way of implementing acts. Further, a new European Artificial Intelligence Board will be established (composed of representatives of the Commission, one member from each of the 27 national competent authorities and the European Data Protection Supervisor). The European Union Agency for Cybersecurity, ENISA, may be involved in cybersecurity topics. Finally, a European database for high-risk AI will also be established by the Commission in collaboration with the Member States. The EU database for stand-alone high-risk AI systems shall contain, and make public, certain information concerning high-risk AI systems, which have to be registered before they are placed on the market.

It is worth noting that supervisory authorities (and notified bodies), while remaining subject to confidentiality obligations like any regulatory authority, may have access to bias assessments, data, documentation, intellectual property, confidential information and trade secrets (including source code).


Question 9

Is the draft Regulation intended to cover all the issues relating to AI?

The Regulation is horizontal, with a cross-sector approach, but sector-specific regulations may include specificities for different verticals

Some of the challenges posed by AI were the subject of resolutions passed by the EU Parliament in October 2020. Liability, IP rights and certain key ethical issues are not addressed by the draft, even though ethical issues, for example, are extensively emphasized in the recitals to the draft Regulation.

The draft Regulation does not affect the application of the liability rules for intermediary service providers in the EU’s forthcoming Digital Services Act[3] (replacing the E-Commerce Directive), which will introduce obligations related to transparency and the audit of algorithmic systems for organizations which connect consumers with products, services and media content. For that reason, the draft Regulation does not refer to the algorithms used in online platforms and the information society ecosystem. It is possible that some algorithms used in programmatic advertising might eventually be prohibited as manipulative or exploitative practices but, as noted above, that would require an amendment to the list of prohibited AI systems, as they are currently not included.

Organizations should be in no doubt that data protection regulations, competition law and consumer law will complement the draft Regulation, together with sector-specific legal frameworks.


Question 10

Similarities between the Regulation and GDPR

The Regulation is without prejudice to, and complements, the EU’s GDPR

As mentioned above, the Regulation is without prejudice to, and complements, the EU’s General Data Protection Regulation (GDPR). In addition, there are many similarities between both Regulations, which suggests that the Commission intends to achieve a similar objective with this new legal instrument: Setting a global standard or benchmark embedding the respect for fundamental rights.

Other examples of parallels between the proposed Regulation and GDPR include:

(i)    extraterritorial scope: the requirements and obligations apply to providers and users of AI systems in the EU, regardless of whether those providers and users are established in or outside the EU,

(ii)   the penalty scheme for infringements is also similar, though the Regulation proposes an increase compared to the GDPR,

(iii)  the methodology includes ex ante self-assessments (third-party assessments where AI systems are intended to be used for remote biometric identification of persons) to check conformity with the requirements as well as continuous monitoring throughout the lifecycle of the AI system,

(iv)  accountability obligations demand that operators keep records and documentation proving compliance for a set retention period,

(v)   providers located outside the EU but marketing AI within the EU need to appoint an EU-based legal representative, who may be liable for breaches of the Regulation,

(vi)  the requirements must be implemented by design, ex ante, for high-risk AI, with liability commencing from the start of the process,

(vii)  once the Regulation enters into force, there will be a 24-month moratorium period to enable readiness,

(viii) the framework pivots around the cornerstone of the purpose behind the AI system: this purpose will determine compliance with respect to transparency about the intention of the AI,

(ix)  a risk-based approach to technical and organizational measures,

(x)   there is flexibility for organizations on the path or technical solutions to achieve compliance. The way to meet the requirements is open, taking into account the state of the art.

[3] Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC, Brussels, 15.12.2020, COM(2020) 825 final, 2020/0361 (COD).

Question 11

Important dates to consider

Some key dates to pay attention to

Stakeholders and interested parties can provide comments on the draft Regulation directly to the Commission by 22 June 2021. The legislative process could still take more than a year until the Regulation is enacted, though the EU institutions would like to expedite the process as much as possible, acknowledging that, otherwise, the Regulation risks losing its purpose.

Once enacted, the Regulation will enter into force 20 days after publication, with a moratorium period of two years before it becomes fully applicable. In practice, this means that organizations will have 24 months to achieve readiness for the AI obligations; during that period they will not be subject to penalties, but they will need to work toward compliance.


Question 12

What should organizations do before the Regulation is passed?

A regulation is a legislative instrument that, in the EU system, does not need implementation into national laws or further development to be binding

The AI Regulation will become binding once it enters into force shortly after publication, but its obligations will only apply in full at the end of the moratorium period (see comments in Q11). That means that supervisory authorities will be able to request proof of compliance (e.g., documentation, logs) from the very first day of applicability.

Until the legislative process finishes and the Regulation is enacted, the current wording of the proposal will be subject to change. But despite final amendments, the key message will remain: certain AI uses will be forbidden, while others will be considered high-risk and subject to strict regulatory requirements. Thus, considering that AI is increasingly used by organizations in different areas of activity (e.g., HR, big data, bots or security), the first thing to do is to make sure that the relevant stakeholders are aware of the forthcoming AI framework.

Secondly, after assessing the qualification and classification of each AI system (which determines the applicability of the draft AI Regulation), creating an inventory and mapping the AI currently used (either directly or via third-party tools) is crucial in order to plan the AI and data strategy and to avoid a last-minute rush or exposure to non-compliance risks. Part of the mapping exercise is to better understand the organization’s position in the value chain, as that is key to ring-fencing liability and assuming only the consequences of intentional modifications of the purpose or of the AI itself.

Proper awareness of the AI implemented, or planned to be implemented, will be the basis for building a bespoke AI and data governance model. Once AI uses and purposes are properly located, a gap analysis followed by a risk assessment can be initiated, though it is likely that such a gap analysis will need to be updated once the final text is published.
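Purely as an illustration, the following minimal Python sketch shows the kind of record an AI inventory and mapping exercise might produce; all field names and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryRecord:
    """One entry in an AI register, capturing the facts the draft Regulation
    makes relevant: intended purpose, risk tier and value-chain role."""
    system_name: str
    intended_purpose: str            # drives the risk classification (see Q4)
    risk_tier: str                   # "prohibited" | "high" | "low" | "minimal"
    value_chain_role: str            # "provider" | "user" | "importer" | "distributor"
    third_party_tool: bool           # procured externally or built in-house
    business_areas: list[str] = field(default_factory=list)

register = [
    AIInventoryRecord(
        system_name="cv-screening",
        intended_purpose="employment",
        risk_tier="high",            # Annex III: employment and workers management
        value_chain_role="user",
        third_party_tool=True,
        business_areas=["HR"],
    ),
]

# First cut of a gap analysis: flag high-risk systems for detailed review.
for rec in register:
    if rec.risk_tier == "high":
        print(f"{rec.system_name}: assess against the high-risk requirements (see Q4)")
```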

Even without the draft Regulation, organizations need to think about their digitization strategy, including where AI tools currently underpin key services, products and operations. That exercise will provide the basis for AI and data governance policies (including ethical, technical and legal angles) and the implementation of AI-specific risk management to support a ‘compliance-by-design’ approach.


Question 13

Conclusion

Act as soon as possible

It is important to understand the Regulation and the impact it will have on organizations, particularly if they do not currently have a register or inventory of all AI tools and processes.

EY teams providing legal, technical, public policy and governance advice are working together to deliver a comprehensive solution for clients, one which promotes best practice. Organizations which do not move to classify their AI currently in scope, as well as that being produced or procured as part of any technology road map, run the risk that they will not have time to comply.

