EU AI Act Maturity Assessment

The EU AI Act is now in effect and organizations using AI must comply with mandatory requirements for risk management, data governance, and human oversight. Our teams, including lawyers and world-class AI and regulatory professionals, have developed a comprehensive impact assessment that helps organizations understand where they are on their Responsible AI journey and what steps they need to take next.

At its core, the AI Act establishes a three-tier model of risk classification to assess and mitigate the impact of AI systems on fundamental rights and user safety:

  • Unacceptable risk: Systems that pose an unacceptable risk and are prohibited outright under the Act.
  • High risk: Systems classified as high risk, which must comply with multiple requirements and undergo a conformity assessment.
  • Lower risk: Systems that do not meet the criteria for the other two tiers but still present limited risk; they are subject to transparency obligations and are encouraged to apply the same practices as high-risk AI systems.

As a first step, organizations should identify all AI applications in use and rate their respective risks. Depending on the risk classification, each AI system is subject to different regulatory requirements. As the first class (unacceptable risk) is prohibited and the last (lower risk) only needs to meet light-touch requirements, an AI framework needs to be geared towards the high-risk AI systems that are in use or planned for the future.
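
As a rough illustration of this first step, the sketch below models a hypothetical AI inventory with the three tiers as an enumeration and filters out the high-risk systems that an AI framework should prioritize. The class names, example systems, and tier assignments are illustrative assumptions only; they are not part of the Act or of EY's assessment.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The three risk tiers described above (illustrative labels only)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # multiple requirements plus a conformity assessment
    LOWER = "lower"                # transparency obligations, voluntary best practice


@dataclass
class AISystem:
    """One entry in a hypothetical inventory of AI applications."""
    name: str
    purpose: str
    risk_tier: RiskTier


def systems_needing_full_compliance(inventory: list[AISystem]) -> list[AISystem]:
    """Return the high-risk systems the compliance framework should focus on."""
    return [s for s in inventory if s.risk_tier is RiskTier.HIGH]


inventory = [
    AISystem("chatbot", "customer FAQ assistant", RiskTier.LOWER),
    AISystem("cv-screening", "candidate shortlisting", RiskTier.HIGH),
]
print([s.name for s in systems_needing_full_compliance(inventory)])  # ['cv-screening']
```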

EY developed the AI Act Maturity Assessment to:

  • Help organizations navigate the regulation’s requirements
  • Assess the use of AI systems and the extent to which the regulation applies
  • Help organizations understand where they stand against the regulation’s requirements and to what extent they are ready to comply
  • Assess organizational maturity and determine areas of prioritized focus
  • Perform a deep dive on specific AI systems in view of the legal requirements set by the AI Act

As it is often more costly and complex to ensure compliance once AI systems are already in operation than during the design and implementation phase, we recommend that firms start preparing early. This includes setting up a register of all AI applications used in the organization, risk-rating them and putting in place adequate measures across the following areas (a minimal register sketch follows the list):

  • AI governance, policies and design standards
  • Resource management
  • Risk and control framework
  • Data management
  • Secure architecture
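
As a loose sketch of what such a register might capture, the example below tracks each AI application together with its risk rating and a checklist over the five areas listed above. The RegisterEntry class, its field names, and the example values are hypothetical and not part of EY's assessment or of the Act.

```python
from dataclasses import dataclass, field

# The five readiness areas listed above, used as a per-system checklist.
READINESS_AREAS = (
    "AI governance, policies and design standards",
    "Resource management",
    "Risk and control framework",
    "Data management",
    "Secure architecture",
)


@dataclass
class RegisterEntry:
    """One row of a hypothetical AI application register."""
    system_name: str
    risk_rating: str  # e.g. "unacceptable", "high", "lower"
    readiness: dict[str, bool] = field(
        default_factory=lambda: {area: False for area in READINESS_AREAS}
    )

    def open_gaps(self) -> list[str]:
        """Areas where adequate measures are not yet in place."""
        return [area for area, done in self.readiness.items() if not done]


entry = RegisterEntry("cv-screening", risk_rating="high")
entry.readiness["Data management"] = True
print(entry.open_gaps())  # the four remaining areas to address
```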

Confidently implement Responsible AI with EY
Connect with us to explore how we can support your organization in navigating the EU AI Act and successfully implementing AI solutions.
