These groups are:
- High risk,
- Low risk, and
- Minimal risk.
The severity of the regulatory approach depends on the classification of the AI in question. The proposed legislation sets out a regulatory structure that bans some uses of AI, heavily regulates high-risk uses and lightly regulates less risky AI systems.
Organizations should be aware that the Commission intends to prohibit certain uses of AI which are deemed to be unacceptable because they:
(i) Deploy subliminal techniques or exploit vulnerabilities of specific groups of persons due to their age or disability, in order to materially distort a person’s behavior in a manner that causes physical or psychological harm;
(ii) Lead to ‘social scoring’ by public authorities, or
(iii) Conduct ‘real-time’ remote biometric identification in publicly accessible spaces (with some derogations).
At the other end of the scale, for low or minimal risk AI systems, the framework indicates that there will be few restrictions, or none at all. In between sit high-risk AI systems, which are permitted subject to compliance with certain requirements, and certain non-high-risk AI systems (e.g., impersonation, bots), which are permitted subject to information and transparency obligations. These obligations might include making transparent to humans that they are interacting with an AI system and that emotion recognition or biometric categorization is being applied, as well as labeling so-called ‘deep fakes’ (with some exceptions).
Due to this risk-based approach, most of the obligations and requirements outlined in the draft Regulation refer to high-risk AI. The classification of an AI system as high-risk is based on its intended purpose (in line with product safety legislation): it depends not only on the function the system performs, but also on the specific purpose and modalities for which the system is used.
There are two broad groups of AI that are considered high-risk:
(i) those intended to be used as a Safety Component of a product, or which are themselves products, covered by the EU harmonization legislation listed in Annex II, and which are required to undergo a third-party conformity assessment; and
(ii) stand-alone systems in eight areas:
a. Biometric identification and categorization of natural persons,
b. Management and operation of critical infrastructure,
c. Education and vocational training,
d. Employment, workers management and access to self-employment,
e. Access to, and enjoyment of, essential private services and public services and benefits,
f. Law enforcement,
g. Migration, asylum and border control management, and
h. Administration of justice and democratic processes.
These areas are listed in Annex III. Although the list is fixed, the Commission may update it, under the criteria set out in the Regulation, with some input from Member States.
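As a purely illustrative sketch, this two-pronged classification can be expressed as a simple decision rule. The class, field names and abbreviated Annex III labels below are our own assumptions for the example, not terms defined in the draft Regulation.

```python
from dataclasses import dataclass

# Hypothetical encoding of the eight Annex III areas (labels are ours, abbreviated).
ANNEX_III_AREAS = {
    "biometric identification and categorization",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

@dataclass
class AISystem:
    intended_purpose: str               # classification follows intended purpose
    is_safety_component: bool           # used as a Safety Component of a product
    covered_by_annex_ii: bool           # product covered by Annex II legislation
    needs_third_party_assessment: bool  # third-party conformity assessment required

def is_high_risk(system: AISystem) -> bool:
    """Return True if the system falls within either high-risk group."""
    # Group (i): Safety Components of, or products under, Annex II legislation
    # that must undergo a third-party conformity assessment.
    if ((system.is_safety_component or system.covered_by_annex_ii)
            and system.needs_third_party_assessment):
        return True
    # Group (ii): stand-alone systems whose intended purpose falls within one
    # of the eight Annex III areas.
    return system.intended_purpose in ANNEX_III_AREAS
```

Note that the rule keys on the system's intended purpose rather than its underlying technology, mirroring the product-safety logic described above.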
High-risk AI is permitted subject to the fulfilment of certain obligations. These are:
(i) Risk management
(ii) Data governance
(iii) Technical documentation
(iv) Record keeping (traceability)
(v) Transparency and provision of information to users
(vi) Human oversight
(vii) Cybersecurity robustness.
It is beyond the scope of this paper to go into detail on each of these requirements. However, we set out some ideas on three of these below:
Risk management systems need to be established, implemented, documented and maintained. Risk management is a continuous, iterative process running throughout the entire product lifecycle, requiring regular and systematic updating. The intended purpose of the high-risk AI system and the risk management system itself shall be considered when assessing compliance with the requirements set forth in the draft Regulation.
The data governance requirement is key, not only because without data there is no AI, and because the data used for training, validation and testing will define the legality (and the ethics) of the AI system, but also, perhaps more importantly, because infringing the data governance requirement triggers the highest sanction set out in the draft Regulation (see comments in Q7 for further details on penalties). The draft Regulation sets out several characteristics of data governance, mandating that it shall include certain quality criteria (impacting design choices, collection, formulation of assumptions, prior assessment, and examination in view of possible bias and data gaps) and technical limitations. In addition, to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to high-risk AI systems, the providers of such systems may process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. These safeguards include technical limitations on the re-use of the data and the use of state-of-the-art security and privacy-preserving measures, such as pseudonymization, or encryption where anonymization may significantly affect the purpose pursued (in line with the GDPR).
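By way of illustration only, the sketch below shows one such privacy-preservation measure in practice: pseudonymizing direct identifiers with a keyed hash before records are used for bias monitoring. The key handling, function names and record fields are assumptions made for this example, not requirements of the draft Regulation.

```python
import hmac
import hashlib

# The secret key must be held separately from the dataset; whoever holds it
# can link tokens back to individuals, which is what makes this
# pseudonymization rather than anonymization.
SECRET_KEY = b"replace-with-a-key-held-outside-the-dataset"  # assumed placeholder

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_bias_monitoring(record: dict) -> dict:
    """Keep only the fields needed to detect bias; drop direct identifiers."""
    return {
        "subject_token": pseudonymize(record["name"]),  # stable token per person
        # Special-category attribute, retained strictly for bias detection:
        "ethnicity": record["ethnicity"],
        "model_decision": record["model_decision"],
    }
```

Because the token is stable, bias metrics can still be computed per individual across datasets, while re-identification requires access to the separately stored key.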
On the cybersecurity robustness criterion, it is important to highlight the presumption of compliance for high-risk systems when they are certified, or when a statement of conformity has been issued, under a cybersecurity scheme pursuant to the Cybersecurity Act. More generally, to ease the compliance burden, there are instruments for achieving a presumption of conformity with the requirements, including adherence to standards, common specifications, cybersecurity schemes, conformity assessments, certificates and EU declarations of conformity.
Safety Component: a physical or digital component, including software, of machinery which serves to fulfil a safety function and which is independently placed on the market, the failure or malfunction of which endangers the safety of persons, but which is not necessary for the machinery to function, or which may be substituted by normal components for the machinery to function (Draft Regulation on Machinery Products).