
How AI assessments can enhance confidence in AI

Well-designed AI assessments can help evaluate whether investments in the technology meet business and society’s expectations.


In brief
  • Business leaders, policymakers and investors seek clarity on the impact, reliability and trustworthiness of artificial intelligence (AI) systems.
  • Around the world, recent AI public policy initiatives are introducing assessment frameworks, which are often referred to as “AI audits” or “AI assurance.”
  • AI assessments, whether voluntary or regulatory, can help businesses evaluate AI performance, governance and compliance.

For business leaders, policymakers and the public, AI represents a generational opportunity for increasing productivity and innovation. But AI can also seem like a black box — with minimal transparency and assurance of its effectiveness, governance and trustworthiness. And while AI assessment frameworks are emerging that aim to address these concerns, the sheer number and variety of approaches are challenging to navigate.

Written by EY professionals in collaboration with the Association of Chartered Certified Accountants (ACCA), this article explores the nascent field of AI assessments, identifies the characteristics of effective AI assessments and highlights key considerations for business leaders and policymakers. Our review finds a rapidly emerging assessment ecosystem that provides businesses with an opportunity to build and deploy AI systems that are more likely to be effective, safe and trusted. AI assessments — whether voluntary or mandatory — can increase confidence in AI systems. When well-designed, they can enable business leaders to evaluate whether the systems are performing as intended, inform effective governance and risk mitigation, and support compliance with any applicable laws, regulations or standards.  

The concerns about AI — like the excitement — are broad-based. The EY 2025 AI Sentiment Index Study, a global survey of over 15,000 people, found that while 82% of consumers have chosen to use AI in the past six months, 58% of consumers are worried that organizations are failing to hold themselves accountable for negative uses of AI. Business leaders are asking how they can assess whether an AI system is safe and effective; how they should identify and manage its risks; and how to measure an AI system against governance and performance criteria.

AI assessments: enhancing confidence in AI

Effective assessments of artificial intelligence can support strong governance, compliance and performance.

Understanding the AI assessment landscape

As of January 2025, policymakers from nearly 70 countries have introduced over a thousand AI public policy initiatives, including legislation, regulation, voluntary initiatives and agreements, according to the Organisation for Economic Co-operation and Development (OECD). Many of these initiatives include various types of AI assessments. These assessments can be broadly grouped into three categories:

  1. Governance assessments, which determine whether appropriate internal corporate governance policies, processes and personnel are in place to manage an AI system, including that system’s risks, suitability and reliability.

  2. Conformity assessments, which determine whether an organization’s AI system complies with relevant laws, regulations, standards or other policy requirements.

  3. Performance assessments, which measure the quality of performance of an AI system’s core functions, such as accuracy, non-discrimination and reliability. They often use quantitative metrics to assess specific aspects of the AI system.

Even with these three emerging types of assessments, there can be significant variations in assessment quality. To address these variations, we recommend that all AI assessments include the following characteristics:

  • Specificity about what is to be assessed and why: An effective AI assessment framework will have a clearly specified and articulated business or policy objective, scope and subject matter.

  • Clear methodology: Methodologies and suitable criteria determine how a subject matter is assessed, and it is essential that similar AI assessments use clearly defined and consistent approaches. Some assessments, for instance, may include explicit opinions or conclusions, while others may only provide a summary of procedures performed. Consistency, combined with clear terminology, allows users to compare assessment outcomes and understand how they were reached.

  • Suitable qualifications for those providing the assessment: The choice of assessment provider is crucial and directly influences the credibility, reliability and overall integrity of the process. Key considerations for selecting assessment providers include competency and qualifications, objectivity and professional accountability.

Next steps for business leaders

  • Consider the role AI assessments can play in enhancing corporate governance and risk management. AI assessments can help business leaders identify and manage evolving risks associated with their AI systems and help indicate whether AI systems perform as intended.

  • Evaluate whether to conduct voluntary assessments — even in the absence of any regulatory obligations — to build confidence in AI systems among employees, customers and other important stakeholders. Market dynamics, investor demand or internal governance considerations may make a voluntary AI assessment advisable to build confidence in a business’s AI systems. Moreover, if some AI systems are subject to regulatory obligations, business leaders may choose to use assessments to help measure and monitor compliance.

  • Where voluntary assessments are used, determine the most appropriate assessment. Business leaders will want to determine whether to conduct a governance, conformity or performance assessment, and whether it should be conducted internally or by a third party.

Summary

Business leaders and policymakers are considering the role AI assessments can play in supporting their AI governance objectives. While the current AI assessment landscape presents some clear challenges, a diverse group of stakeholders is working to address these issues. The development of this ecosystem is important. If properly designed and conducted in a careful and objective manner, AI assessments can help businesses assess the reliability of their AI systems and promote the trust and confidence in AI needed to realize its potential.
