3 minute read 30 Jul 2021

Biometric systems in the forthcoming EU Artificial Intelligence Act

Related topics Law Digital AI

Biometric AI systems will be treated according to their risks, taking into account their purposes, features, and the time and location in which they are used.

On 21 April 2021 the European Commission presented its legislative proposal for a Regulation on Artificial Intelligence (the Proposal), introducing a harmonized regulatory framework intended to enable the development of trustworthy AI across the European Union (EU).

The Proposal, therefore, lays down uniform rules for the development, placement on the market and use of AI systems in conformity with Union values.

Following a risk-based approach, AI systems are classified in four categories (prohibited, high-risk, limited risk and minimal risk) depending on the threat posed to user safety and fundamental rights, and which are subject to different levels of regulatory intervention in accordance with the relevant risks.
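The four-tier classification can be sketched as a simple enumeration. This is an illustrative shorthand only; the labels and descriptions below are paraphrases, not terms taken from the legal text:

```python
from enum import Enum

class RiskCategory(Enum):
    """Illustrative labels for the Proposal's four risk tiers."""
    PROHIBITED = "unacceptable risk: banned AI practices"
    HIGH_RISK = "high risk: strict requirements and conformity assessment"
    LIMITED_RISK = "limited risk: transparency obligations"
    MINIMAL_RISK = "minimal risk: no new obligations"
```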

On this basis, biometric AI systems (AI systems using biometric data[1], such as facial recognition) are treated differently throughout the Proposal according to their risks, taking into account their purposes, features and the time and location in which they are used.

Prohibited biometric AI systems

The list of forbidden AI practices (those that create unacceptable risks as contravening EU values) includes the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (except for three exceptional limited situations where its use is deemed justified for reasons of substantial public interest[2]).

These systems are considered to be highly intrusive because, as expressly laid down in the Proposal, they “may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.”

Consequently, the ban on the use of biometric systems will apply when all of the following circumstances concur:

i. The systems are used to identify an individual by comparing his/her biometric data with biometric data included in a reference database, without the user of the AI system knowing in advance whether the person will be included in the database and can be identified.

Thus, the prohibition only affects biometric identification systems (which aim to identify an individual within a group by comparing his/her data to those of each individual in the group), while biometric authentication/verification systems (which aim to prove the identity claimed by an individual by comparing the latter’s data with the data of the claimed identity) will be allowed[3].

ii. The identification is remote, i.e. carried out at a distance.

iii. The capturing of biometric data, the comparison and the identification occur in real time, without a significant delay (this covers instant identification and limited short delays).

By contrast, “post” remote biometric identification (i.e. identification performed after the biometric data has been collected and with a significant delay) will be permitted.

iv. The systems are used in publicly accessible spaces, meaning physical places accessible to the public, regardless of whether the place is privately or publicly owned and whether certain conditions for access apply (e.g. streets, government premises, transport infrastructure, cinemas, theatres, shops, shopping centres, etc.).

Accordingly, private places which are generally not accessible for third parties, as well as online spaces are not covered by the prohibition.

v. The systems are used for law enforcement purposes.

Thus, their use by other public authorities or private actors for other purposes will not be forbidden.
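The cumulative conditions above can be sketched as a simple decision check. This is a hypothetical illustration of the logic as described in the Proposal, not a legal compliance tool; the class and field names are the author's own shorthand:

```python
from dataclasses import dataclass

@dataclass
class BiometricUse:
    """Hypothetical description of a biometric AI deployment."""
    identification: bool    # (i)  identifies individuals against a reference database
    remote: bool            # (ii) performed at a distance
    real_time: bool         # (iii) capture, comparison and identification without significant delay
    public_space: bool      # (iv) used in a publicly accessible space
    law_enforcement: bool   # (v)  used for law enforcement purposes
    exception_applies: bool = False  # one of the three public-interest exceptions[2]

def is_prohibited(use: BiometricUse) -> bool:
    """All five conditions must concur; the three exceptions lift the ban."""
    return (use.identification and use.remote and use.real_time
            and use.public_space and use.law_enforcement
            and not use.exception_applies)
```

For example, a one-to-one verification at customer onboarding fails the first condition (it is authentication, not identification), so it would not be caught by the ban.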

High-risk biometric AI systems

In light of the above, biometric AI systems other than those expressly prohibited under the Proposal will be permitted.

However, on the basis that they pose significant risks to the health and safety or fundamental rights of persons, certain of those systems qualify as high-risk; specifically, AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.

These systems must, therefore, comply with several specific requirements which apply to high-risk AI systems (risk management, data governance, technical documentation, record keeping, transparency and information to users, human oversight, accuracy, cybersecurity and robustness).

In addition, stand-alone high-risk AI systems must also undergo ex ante conformity assessment procedures (i.e. before they are placed on the EU market). While the conformity assessment must generally be carried out by the provider under its own responsibility, remote biometric identification systems will be subject to third-party conformity assessment.

It is also worth noting that emotion recognition systems (as defined below) are classified as high-risk only to the extent they are used by law enforcement authorities and by competent authorities in the fields of migration, asylum and border control management.

Other biometric AI systems

Considering that they pose specific risks of impersonation or deception, the Proposal imposes certain transparency obligations on AI systems intended to interact with natural persons (e.g. chatbots), emotion recognition systems[4] and biometric categorisation systems[5], as well as on AI systems used to generate or manipulate image, audio or video content (“deep fakes”).

  • As regards AI systems intended to interact with natural persons, individuals must be informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
  • Concerning emotion recognition systems and biometric categorisation systems, individuals must be made aware of the operation of the system they are exposed to.
  • In the case of deep fakes, individuals must be informed that the content has been artificially created or manipulated, and the AI output must be labelled accordingly.

These transparency obligations are applicable regardless of whether the specific system qualifies as high-risk or not. Thus, if qualifying as a high-risk system, the specific requirements and obligations for the latter must also be complied with.
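The cumulative nature of these duties can be sketched as follows. The system-type keys and duty descriptions are illustrative paraphrases by the author, not wording from the Proposal:

```python
# Illustrative mapping of system types to their transparency duties
TRANSPARENCY_DUTIES = {
    "interactive_system": "inform individuals they are interacting with an AI system",
    "emotion_recognition": "inform individuals the system is in operation",
    "biometric_categorisation": "inform individuals the system is in operation",
    "deep_fake": "disclose and label artificially generated or manipulated content",
}

def duties_for(system_type: str, high_risk: bool) -> list:
    """Transparency duties apply regardless of risk class; high-risk
    requirements come on top of them, not instead of them."""
    duties = [TRANSPARENCY_DUTIES[system_type]]
    if high_risk:
        duties.append("comply with high-risk requirements "
                      "(risk management, conformity assessment, etc.)")
    return duties
```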

Furthermore, it must be noted that certain exceptions for the purposes of law enforcement, freedom of expression and freedom of the arts and sciences apply.


Initial reactions to the Proposal

Reactions to the proposed set of rules for biometric AI systems followed shortly after the Proposal was published.

For instance, in June 2021 the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) released a Joint Opinion on the Proposal[6] in which they call for a general ban on any use of AI for automated recognition of human features in publicly accessible spaces (e.g. recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals) in any context, as such use poses extremely high risks to fundamental rights and freedoms.

They also recommend prohibiting (i) AI systems that use biometrics to categorise individuals into clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited; and (ii) AI systems used to infer the emotions of natural persons, except for certain limited cases (e.g. for health or research purposes) subject to the adoption of appropriate safeguards.

The European Parliament has likewise expressed concerns in this regard, emphasizing the need for strong safeguards on the use of AI in law enforcement. In a draft report to be debated in September, it proposes a permanent ban on the use of biometric data (gait, fingerprints, DNA, voice) for recognizing people in publicly accessible spaces, as well as on the use of private facial recognition databases by law enforcement. Likewise, MEPs highlight that facial recognition should not be used for identification until such systems comply with fundamental rights, and that automated recognition-based systems should not be used for border control purposes.

Several human rights movements and organizations also find the Proposal insufficient, arguing that it restricts only certain biometric mass surveillance practices and therefore falls short of protecting fundamental rights and freedoms.

Considering that the legislative process has just started, the current text of the Proposal is subject to amendments, so it remains to be seen whether the final version maintains the current limitations and obligations, or the use of AI biometric systems is subject to further restrictions.

Potential impact of the Proposal in practice: frequently asked questions on use cases

Organizations are scrutinizing the impact of the Proposal on commonly used biometric applications.

Frequent questions arise on the use of biometrics within customer onboarding processes. It is important to bear in mind that those processes involve authentication of individuals (proof of a claimed identity) and not identification (determining the identity of a person). Thus, customer onboarding falls within neither the prohibited nor the high-risk group of AI systems.

Video surveillance combined with AI is more intricate. As mentioned, it is permitted when the purpose is not law enforcement. Whether it falls within the high-risk group depends on whether it is carried out remotely and for the purpose of identifying individuals.

The use of biometric data for building digital avatars (e.g. for online shopping size recommendations or online gaming), as long as the aim is not to identify a person but to categorize him/her (assigning the person to a specific category based on his/her biometric data), would be subject to transparency and information obligations.

Naturally, all of the above must be assessed in light of other features, uses or intended purposes that could fall within other prohibited or high-risk categories and, very importantly, complemented with GDPR advice.

Article written by Sofía Fontanals and Blanca Escribano - EY Law, Spain


    [1] Biometric data is defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”.

    [2] These situations are: (i) search for potential victims of crime, including missing children; (ii) prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; and (iii) detection, localisation, identification or prosecution of perpetrators or suspects of certain criminal offences. Among other conditions, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of these objectives will be subject to (i) authorisation by a judicial authority or by an independent administrative authority of a Member State, which must be obtained prior to the use unless in duly justified cases of urgency; and (ii) appropriate limits in time and space.

    [3] See Joint paper of the Spanish data protection authority, Agencia Española de Protección de Datos (AEPD), and the European Data Protection Supervisor (EDPS) on 14 misunderstandings with regard to biometric identification and authentication. In addition, according to the Article 29 Data Protection Working Party “Opinion 3/2012 on developments in biometric technologies” (00720/12/EN - WP193), biometric identification is the process of comparing biometric data of an individual (acquired at the time of the identification) to a number of biometric templates stored in a database (i.e. a one-to-many matching process); while biometric authentication/verification is the process comparing the biometric data of an individual (acquired at the time of the verification) to a single biometric template stored in a device (i.e. a one-to-one matching process).

    [4] Emotion recognition systems are defined as AI systems for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

    [5] Biometric categorisation systems are defined as AI systems for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data.

    [6] EDPB-EDPS Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).

