Artificial intelligence (AI) is at, or at the very least approaching, an inflection point. Organisations and consumers are beginning to unlock the technology’s transformational potential. However, they are also grappling with how to manage its complexities, opacity and considerable risks.
While AI represents a generational opportunity to increase productivity and innovation, it can also seem like a black box, offering little transparency or assurance about its effectiveness, governance and trustworthiness.
Concerns about AI are as widespread as the excitement it generates. The EY 2025 AI Sentiment Index Study, a global survey of over 15,000 people, found that while 82% of consumers have chosen to use AI in the past six months, 58% worry that organisations are failing to hold themselves accountable for negative uses of AI1. In Ireland, the EY Responsible AI Pulse Survey2 shows that every organisation surveyed has either adopted AI or plans to do so. Yet governance remains a challenge: 54% of CXOs admit it is difficult to develop frameworks for current AI technologies, and 46% believe their organisation’s approach to technology-related risks is insufficient to address emerging AI challenges.
Meanwhile, business leaders are asking how to assess whether an AI system is safe and effective; how to identify and manage its risks; and how to measure it against governance and performance criteria.
A gap is opening: not between those who have deployed AI and those who haven’t, but between those who want to go deeper and those who continue to dabble.
Assessments: The key to building trust in AI
Just as a safety inspection gives people confidence to step into an elevator, AI assessments give organisations the assurance to move forward. When trust is partial, progress is cautious and constrained, holding back potential. However, when trust is earned and confidence is high, you go all the way to the top. The same is true for AI. Credible, independent assessments turn hesitation into acceleration, enabling businesses to reach new heights of innovation, adoption and value.
Rigorous assessments, whether voluntary or mandatory, can help to address key issues and ensure that AI is developed and deployed safely and effectively. They can help to build the confidence that is essential if AI’s potential is to be maximised and its associated risks minimised.
Effective AI assessments are vital for strong corporate governance: they help confirm that an AI system performs as intended, complies with laws, regulations and standards, and is managed in line with internal policies and ethical principles.
However, while AI assessment frameworks that aim to address these concerns are emerging, the sheer number and variety of approaches can be difficult to navigate.
EY has worked with the Association of Chartered Certified Accountants (ACCA)3 to identify the characteristics of effective AI assessments. Our review found a rapidly emerging assessment ecosystem that provides businesses with an opportunity to build and deploy AI systems that are more likely to be effective, safe and trusted.
Understanding global AI assessment frameworks
The EU is not alone in introducing AI legislation and regulation. According to the Organisation for Economic Co-operation and Development (OECD), as of January 2025, nearly 70 countries around the world had introduced over a thousand AI public policy initiatives. These include legislation, regulation, voluntary initiatives and agreements. Many of these initiatives include AI assessments, often referred to as “AI assurance” or “AI audits”.
Recent reports indicate that the EU is set to water down its landmark AI Act following pressure from major technology companies4. This underscores the importance of voluntary, robust AI assessments in maintaining trust and accountability even as formal rules evolve.
Broadly speaking, these assessments can be grouped into three categories:
Governance assessments: To determine whether appropriate internal corporate governance policies, processes and personnel are in place to manage an AI system, including its risks, suitability and reliability.
Conformity assessments: To determine whether an organisation’s AI system complies with relevant laws, regulations, standards or other requirements.
Performance assessments: To measure how well an AI system performs in terms of accuracy, non-discrimination, reliability and so on (the sketch after this list illustrates two such metrics).
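To make the performance category concrete, the short Python sketch below computes two metrics such an assessment might report: overall accuracy, and a simple non-discrimination check based on the gap in positive-prediction rates between two groups (sometimes called demographic parity difference). This is a minimal illustration only; the data, group labels and metric choices are hypothetical and do not represent any formal assessment methodology.

# A minimal, illustrative sketch of two metrics a performance assessment
# might report. All data and group labels below are hypothetical.

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the ground truth.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def parity_gap(y_pred, groups):
    # Absolute difference in positive-prediction rates between groups.
    # A value near 0 suggests similar treatment across groups; larger
    # gaps would warrant closer review in a non-discrimination check.
    rates = []
    for g in sorted(set(groups)):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Hypothetical outcomes for eight loan applicants in two groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # actual outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Accuracy:   {accuracy(y_true, y_pred):.2f}")    # 0.75
print(f"Parity gap: {parity_gap(y_pred, groups):.2f}")  # 0.00

In practice, a performance assessment would apply a far broader set of metrics, on representative data, with documented tolerances for each.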
Even within these three categories, the rigour and quality of assessments can vary considerably.
To assist organisations in selecting an appropriate assessment, we recommend that all AI assessments include the following characteristics: