Five early reflections on the EU’s proposed legal framework for AI

By Ruchi Bhowmik

EY Global Public Policy Vice Chair

5 minute read 22 Jul 2021

The European Commission’s proposed legal framework on artificial intelligence (AI) represents an important milestone in the global race to regulate AI.

In brief
  • Mitigating risks to the health, safety, and fundamental human rights of individuals is central to the EU’s regulatory approach to AI.
  • Regulators should focus more on the “actual” rather than “intended” use of AI.
  • The EU’s approach could shape the direction of AI regulation for years to come, but a holistic approach to AI regulation is needed.

As the use of AI accelerates around the world, policymakers are asking what frameworks should guide the design and use of AI, and how it can benefit society. The EU has taken the first major step toward answering these questions with a proposed legal framework for AI, released on 21 April 2021. In doing so, the EU is seeking to establish a safe environment for AI innovation and to position itself as a leader in setting “the global gold standard” for regulating AI.

The following are five early reflections:

1. The focus is on “future-proof” regulation and the intended uses of AI

This is a positive aspect of the proposal, as AI encompasses a broad set of technologies, tools and applications. Shifting the focus away from the underlying technology, which can have significantly different impacts depending on the application for which it is used, and toward its intended use helps to mitigate the risk of divergent requirements for AI products and services. This in turn creates a favorable environment for future-focused AI innovation. In addition, consistent and well-resourced implementation of the proposed market surveillance authority will be crucial to successfully future-proofing the AI regulation.

2. It’s all about risk

The proposals take a product-safety-inspired approach in areas such as risk management systems and cybersecurity. The framework sorts AI systems into risk categories (unacceptable, high, limited, minimal) with proportionately scaled conformity requirements. It also creates the European Artificial Intelligence Board as a new oversight body to review and recommend updates to the lists of unacceptable and high-risk AI systems. Ethical areas such as organizational governance (e.g., ethical oversight boards) and decision-making procedures (e.g., stakeholder consultations), which were part of the European Commission’s High-Level Expert Group’s “Ethics Guidelines for Trustworthy AI,” are not included in the obligations for providers of high-risk AI systems. These may become part of future legislative proposals, such as the anticipated adaptations to the EU and national liability frameworks.

3. Gaps exist in the monitoring of AI system effectiveness

The proposals do not provide a methodology for dealing with exaggerated claims of AI system capabilities or penalties for falsifying “intended use.” For example, organizations that use AI facial analysis-based emotion recognition technology do not have to provide evidence that their systems work. There is also no methodology for validating that the actual user of the AI system is the “intended user.” This could, for example, be problematic for AI technologies that are restricted to certain age groups (where the user cannot be a minor) or where the intended use and users are outside of the EU.

4. Countries outside of the EU need to recognize that this is just one piece of the puzzle

When assessing the merits of the European Commission’s proposals, it is important to recognize that they are just one component of the EU’s digital strategy and of the regulatory mosaic for a Europe in the Digital Age. Other concerns related to AI, such as data privacy, are deliberately left out because they are addressed in other legislation (e.g., the EU’s General Data Protection Regulation). The suitability of the EU’s AI approach as a stand-alone model for other countries therefore needs to be considered holistically.

5. Consideration needs to be given to how the proposal will impact organizations providing digital services globally

While the extra-territorial elements of the proposed regulation are consistent with the requirements of a level playing field for business operations within the EU, they inevitably pose challenges for the global provision of digital services that benefit people and organizations both inside and outside the EU. Global coordination through multilateral fora will be required to provide mutually recognized mechanisms for accrediting delegated bodies in non-EU countries. This approach will help to ensure efficient compliance assessments for companies based outside the EU.

Our view

The European Commission’s proposals represent an important milestone in the regulation of AI. EY believes that trust in AI is best achieved by ensuring the adoption of globally consistent principles for the design of AI system governance, risk management and controls – which in turn enables a greater focus on desired AI behaviors and outcomes.

We welcome the European Commission’s focus on risk mitigation, but caution that the lack of focus on organizational governance and decision-making processes fails to capture the important link between AI ethics and the broader social and governance components necessary for responsible business. In our experience, building and maintaining ethics in AI within an organization’s governance framework is critical to mitigating risks and sustaining trust.

 

In this short film, Eva Kaili, Chair of Science and Technology at the European Parliament, observes that over time, policymakers will have a better understanding of how they should intervene on AI, and that the immediate focus should be to address high-risk AI applications. Kaili also highlights the importance of collaboration between the private sector, governments and regulators to align ethical AI frameworks that are relevant from a business standpoint as well as globally applicable.

This is part of a series of short films offering a deeper dive into the perspectives from our Board Imperative Series film, which features Eva Kaili alongside Reid Blackman, Ph.D., Founder and CEO of Virtue Consultants, and John Thompson, Chair of the Board at Microsoft. The series, facilitated by EY’s Center for Board Matters, explores the link between deploying trusted AI and delivering long-term value.

Summary

The European Commission’s proposals on AI present both unique opportunities and risks. EY will watch closely as these proposals take shape and consider what they mean for jurisdictions outside the EU.

About this article

By Ruchi Bhowmik

EY Global Public Policy Vice Chair

Action-oriented public policy strategist. Seeks to earn and maintain EY’s seat at the policy discussion table. Geopolitical and macroeconomic junkie. Will Ferrell fan. Buoyed by family and friends.