As the use of AI accelerates around the world, policymakers are asking what frameworks should guide the design and use of AI, and how the technology can benefit society. The EU has taken the first major step toward answering these questions with a proposed legal framework for AI, released on 21 April 2021. In doing so, the EU seeks to establish a safe environment for AI innovation and to position itself as a leader in setting “the global gold standard” for regulating AI.
The following are five early reflections:
1. The focus is on “future-proof” regulation and the intended uses of AI
This is a positive aspect of the proposal, as AI is a broad set of technologies, tools and applications. Shifting the focus away from the underlying technology, whose impact can vary significantly depending on the application it is used for, and toward intended uses helps to mitigate the risk of divergent requirements for AI products and services. This in turn creates a favorable environment for future-focused AI innovation. In addition, consistent and well-resourced implementation of the proposed market surveillance authorities will be crucial to successfully future-proofing the AI regulation.
2. It’s all about risk
The proposals take a product-safety-inspired approach in areas such as risk management systems and cybersecurity. The framework defines four AI risk categories (unacceptable, high, limited and minimal) with proportionately scaled conformity requirements. It also creates the European Artificial Intelligence Board, a new oversight body to review and recommend updates to the lists of unacceptable and high-risk AI systems. Ethical areas such as organizational governance (e.g., ethical oversight boards) and decision-making procedures (e.g., stakeholder consultations), which were part of the European Commission’s High-Level Expert Group’s “Ethics guidelines for trustworthy AI,” are not included in the obligations for providers of high-risk AI systems. These may become part of future legislative proposals, such as the anticipated proposal on adaptations to EU and national liability frameworks.
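To make the tiered structure concrete, the sketch below (in Python, purely for illustration) shows one way the four risk categories and their proportionately scaled obligations could be represented. The tier names come from the proposal; the obligation summaries are simplified paraphrases, and all function and variable names are our own assumptions, not text from the regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers named in the proposed EU AI regulation."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # conformity requirements before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative mapping from tier to example obligations; the wording
# below is a simplification for this sketch, not regulatory language.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [
        "prohibited: may not be placed on the EU market",
    ],
    RiskTier.HIGH: [
        "risk management system",
        "conformity assessment before market entry",
        "technical documentation and logging",
        "human oversight and cybersecurity measures",
    ],
    RiskTier.LIMITED: [
        "transparency notice (e.g., disclose that the user is interacting with AI)",
    ],
    RiskTier.MINIMAL: [
        "no mandatory obligations; voluntary codes of conduct",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations scaled to a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {obligations_for(tier)}")
```

The key design point the proposal makes, reflected in this sketch, is that obligations scale with the risk tier assigned to an intended use, rather than attaching to the underlying technology itself.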
3. Gaps exist in the monitoring of AI system effectiveness
The proposals do not provide a methodology for dealing with exaggerated claims about AI system capabilities, or penalties for falsifying “intended use.” For example, organizations that use AI-based facial analysis for emotion recognition do not have to provide evidence that their systems actually work. There is also no methodology for validating that the actual user of an AI system is the “intended user.” This could be problematic, for example, for AI technologies restricted to certain age groups (where the user cannot be a minor) or where the intended use and users are outside of the EU.
4. Countries outside of the EU need to recognize that this is just one piece of the puzzle
When assessing the merits of the European Commission’s proposals, it is important to recognize that they are just one component of the EU’s digital strategy and regulatory mosaic for a Europe fit for the digital age. Other concerns related to AI, such as data privacy, are deliberately left out because they are addressed in other legislation (e.g., the EU’s General Data Protection Regulation). Countries considering adopting the EU’s approach to AI therefore need to assess it holistically, not as a stand-alone instrument.
5. Consideration needs to be given to how the proposal will impact organizations providing digital services globally
While the extra-territorial elements of the proposed regulation are consistent with the requirements of a “level playing field” for business operations within the EU, they inevitably pose challenges for the global provision of digital services that benefit people and organizations both inside and outside the EU. Global coordination through multilateral fora will be required to provide mutually recognized mechanisms for accrediting delegated bodies in non-EU countries. This approach would help ensure efficient compliance assessments for companies based outside the EU.
The European Commission’s proposals represent an important milestone in the regulation of AI. EY believes that trust in AI is best achieved by ensuring the adoption of globally consistent principles for the design of AI system governance, risk management and controls – which in turn enables a greater focus on desired AI behaviors and outcomes.
We welcome the European Commission’s focus on risk mitigation, but caution that the limited attention to organizational governance and decision-making processes overlooks the important link between AI ethics and the broader social and governance components necessary for responsible business. In our experience, building and maintaining AI ethics within an organization’s governance framework is critical to mitigating risks and sustaining trust.