
Assurance in an AI world

Building trust through assurance in an AI-driven world

An AI-driven world requires a comprehensive approach to assurance services, building a foundation of trust and transparency.


In brief
  • Artificial intelligence (AI) is changing the game for assurance, expanding the need to address risk and transparency in decision-making.
  • Governance, fairness, data sources, and security are critical focal points for new assurance services around AI technology.
  • Robust assurance practices are foundational to building trust in AI, allowing firms to reap the benefits and mitigate risks.

Artificial intelligence (AI) is poised to have a transformational impact on the accounting profession. Leaders and practitioners alike see opportunities but also challenges as they assess pilot programs, experiment with lower-risk use cases and calculate the potential return. From automated transaction analysis to AI-powered risk assessment, the potential gains in efficiency and accuracy are significant. Emerging agentic AI tools hold promise for fully automating rote tasks, with appropriate human oversight and governance. However, the very systems that promise to streamline accounting and auditing processes also introduce new risks.

Success in this new paradigm requires new services to help deliver transparency and independent verification. The AI regulatory environment remains unclear in areas such as internal control over financial reporting, causing some hesitancy in deployment. Regulatory oversight and public expectations will drive demand for robust AI-related assurance. In other words, there’s a growing need for assurance over the very technologies that are changing assurance.

How technology is changing the assurance landscape

Organizations are at various stages of exploring and leveraging AI. Some are using machine learning or testing generative AI (GenAI) and automated decision systems to manage complex processes. Others leverage AI to make predictions and automate tasks across their core business processes, optimize supply chains and enhance customer service. Many of these same organizations also envision a future where AI agents work autonomously across the enterprise.


As companies adopt AI tools that impact financial reporting and internal controls, audit committees seek validation of the reliability, compliance and governance of these systems. The focus on accuracy and appropriate human oversight must be balanced with the enthusiasm to accelerate adoption. Core requirements for deploying AI systems include transparency and responsible behavior, particularly in sensitive domains such as employment, lending and health care, as well as a demonstrated return on AI investments. With the imperative to understand the origin of data sources and how models both generate content and make decisions, explainable AI (XAI) technologies will play a growing role in assurance.
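
As a brief illustration of what one XAI technique looks like in practice, the Python sketch below applies permutation importance to a toy classifier standing in for a decision system under review. The dataset, model and scoring approach are illustrative assumptions, not a prescribed method; real engagements would select explainability techniques suited to the model class.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Toy classifier standing in for a decision system under review.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Permutation importance: how much does shuffling each input degrade
    # performance? Larger drops indicate heavier reliance on that input.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: {score:.3f}")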


Emerging assurance services

As technologies mature and regulatory expectations evolve, innovation and experience will give rise to several categories of new assurance services. External assurance professionals are well positioned to help lead this evolution, as they can bring risk-based frameworks, professional skepticism and independence to emerging risk areas where trust is still being built, while harnessing AI themselves. Ernst & Young LLP (EY US) has deployed responsible AI to help transform its own Assurance practice with quality at its foundation, which positions it well to guide clients through the emerging risks introduced by AI, where trust and confidence are paramount.

Transaction-level assurance

Transaction-level assurance uses real-time analytics to test transactions for issues such as duplicate payments, missing approvals or segregation-of-duties violations. With live dashboards and exception alerts, these systems flag issues before they result in material misstatements or operational losses.
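
As a minimal sketch of what rule-based exception testing can look like, the Python example below flags duplicate payments, missing approvals and segregation-of-duties violations in a small, hypothetical payment ledger. The column names and rules are illustrative placeholders, not a prescribed methodology.

    import pandas as pd

    # Illustrative payment ledger; the column names are hypothetical.
    payments = pd.DataFrame({
        "payment_id": ["P1", "P2", "P3", "P4"],
        "vendor": ["Acme", "Acme", "Beta", "Gamma"],
        "amount": [1200.00, 1200.00, 560.00, 98000.00],
        "invoice": ["INV-9", "INV-9", "INV-3", "INV-7"],
        "approver": ["jsmith", "jsmith", None, "jsmith"],
        "preparer": ["adoe", "adoe", "adoe", "jsmith"],
    })

    exceptions = []

    # Duplicate payments: same vendor, invoice and amount.
    dupes = payments[payments.duplicated(["vendor", "invoice", "amount"], keep=False)]
    exceptions += [(pid, "possible duplicate payment") for pid in dupes["payment_id"]]

    # Missing approvals.
    missing = payments[payments["approver"].isna()]
    exceptions += [(pid, "missing approval") for pid in missing["payment_id"]]

    # Segregation of duties: the preparer also approved the payment.
    sod = payments[payments["preparer"] == payments["approver"]]
    exceptions += [(pid, "segregation-of-duties violation") for pid in sod["payment_id"]]

    for payment_id, issue in exceptions:
        print(f"{payment_id}: {issue}")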


Assurance for new AI platforms

As businesses increasingly rely on proprietary or third-party AI systems, they will need independent evaluations of how those systems are developed, tested and maintained with effective oversight. Assurance addresses these needs with a focus on evaluating the strength of an organization’s governance model, verifying AI model objectives, evaluating the reliability of data sources, and helping to establish robust training and validation procedures. Stakeholders will expect transparency into how models are built and deployed, including independent verification that they operate in compliance with applicable regulations consistently and within defined performance thresholds.
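
To make the idea of defined performance thresholds concrete, the sketch below checks a model’s observed metrics against hypothetical policy thresholds. The metric set, threshold values and inputs are illustrative assumptions; an actual governance policy would define its own.

    from sklearn.metrics import accuracy_score, roc_auc_score

    # Hypothetical thresholds a governance policy might define.
    THRESHOLDS = {"accuracy": 0.90, "auc": 0.85}

    def check_model_thresholds(y_true, y_pred, y_score):
        """Return each metric's observed value and whether it meets its threshold."""
        observed = {
            "accuracy": accuracy_score(y_true, y_pred),
            "auc": roc_auc_score(y_true, y_score),
        }
        return {name: (value, value >= THRESHOLDS[name])
                for name, value in observed.items()}

    # Example: three correct labels out of four, with plausible scores.
    print(check_model_thresholds([0, 1, 1, 0], [0, 1, 0, 0], [0.2, 0.9, 0.4, 0.1]))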


Third- or fourth-party AI service provider assurance

As companies adopt cloud-based AI tools, such as large language models (LLMs), analytics platforms or AI-as-a-service offerings, they may seek independent verification that those vendors follow appropriate security, privacy and governance practices.

Monitoring existing AI systems

For existing AI systems, assurance must focus on ongoing monitoring to help validate that these systems continue to perform as intended. This includes evaluating the reliability of data sources, monitoring for drift and conducting regular assessments of model performance. Leveraging assurance professionals’ core skills in process documentation, internal control testing and evidence-based verification is essential to maintaining the integrity and effectiveness of AI systems over time. Stakeholders will demand continuous oversight to help confirm that AI systems adhere to compliance standards and operational benchmarks.
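
Drift monitoring can take many forms. As one hedged example, the sketch below computes the population stability index (PSI) between a baseline sample and a current sample of a model score, using synthetic data in place of production values; the ~0.2 alert level is a common rule of thumb rather than a standard.

    import numpy as np

    def population_stability_index(baseline, current, bins=10):
        """PSI between a baseline sample and a current sample; values
        above ~0.2 are a common rule-of-thumb signal of meaningful drift."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        base_counts, _ = np.histogram(baseline, bins=edges)
        curr_counts, _ = np.histogram(current, bins=edges)
        # Convert counts to proportions, flooring at a tiny value to avoid log(0).
        base_p = np.clip(base_counts / base_counts.sum(), 1e-6, None)
        curr_p = np.clip(curr_counts / curr_counts.sum(), 1e-6, None)
        return float(np.sum((curr_p - base_p) * np.log(curr_p / base_p)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 5000)  # scores at model validation
    current = rng.normal(0.4, 1.0, 5000)   # scores observed in production
    print(f"PSI: {population_stability_index(baseline, current):.3f}")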


Data governance and lineage

Financial data represents a largely untapped opportunity to deliver greater real-time value. Regulators and stakeholders want to know where critical data originates, how it is transformed, and whether it is accurate and complete. An independent attestation that validates the traceability of data through an organization’s systems is especially important for data used in financial reporting or regulatory disclosures.
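
One way to make lineage independently verifiable is to record each transformation step in a tamper-evident chain. The sketch below is a minimal illustration using hash chaining; the step names and payloads are hypothetical, and production lineage tooling would capture far richer metadata.

    import hashlib
    import json

    def lineage_record(step, metadata, prev_hash=""):
        """Append-only lineage entry: hashing the step together with the
        previous entry's hash makes later alteration detectable."""
        payload = json.dumps({"step": step, "metadata": metadata,
                              "prev": prev_hash}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        return {"step": step, "metadata": metadata,
                "prev": prev_hash, "hash": digest}

    chain, prev = [], ""
    for step, metadata in [("source_extract", {"rows": 1042}),
                           ("currency_translation", {"rows": 1042}),
                           ("consolidation", {"rows": 37})]:
        entry = lineage_record(step, metadata, prev)
        chain.append(entry)
        prev = entry["hash"]

    # Re-deriving each hash verifies the chain end to end.
    for entry in chain:
        recomputed = lineage_record(entry["step"], entry["metadata"], entry["prev"])
        assert recomputed["hash"] == entry["hash"]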


Algorithmic audits

Applied methodologies can help objectively assess whether AI systems produce reliable outcomes, identify whether issues stem from the data or the model, and provide a framework for management to deploy AI with confidence.
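
As one narrow example of a test an algorithmic audit might run, the sketch below compares error rates across subgroups to see whether unreliable outcomes concentrate in particular segments of the data. The labels, predictions and grouping are hypothetical; a real audit would apply a much broader battery of tests.

    import numpy as np

    def error_rates_by_group(y_true, y_pred, groups):
        """Error rate per subgroup; large gaps between groups suggest the
        model (or its training data) treats some segments less reliably."""
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
                for g in np.unique(groups)}

    # Toy labels and predictions split across two hypothetical segments.
    print(error_rates_by_group(
        y_true=[1, 0, 1, 1, 0, 0, 1, 0],
        y_pred=[1, 0, 1, 0, 1, 0, 0, 1],
        groups=["A", "A", "A", "A", "B", "B", "B", "B"],
    ))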


AI-enhanced processes and systems

As manufacturing, logistics and infrastructure systems begin to integrate with physical AI and related technologies, assurance professionals will be called upon to evaluate AI’s role in existing processes, assess the risks AI introduces, and verify the accuracy and integrity of the data and outcomes produced by that integration.


Cybersecurity program audits

AI data stores, which may include confidential and personal data, represent a primary target for hackers, and the introduction of third-party AI platforms or data sources creates new areas of exposure. At the same time, hackers are leveraging AI for more sophisticated attacks, deploying it to learn and iterate until they find the right moment and method to execute. This raises the stakes on how companies must protect themselves and requires an expanded scope and rigor of assurance services.


Agentic AI

Agentic AI raises the stakes for AI’s potential and the need for assurance over its use. While any use of AI demands an understanding of the data source and transparent decision-making, agentic AI introduces autonomous decision-making: in essence, digital employees empowered with tools to execute business processes. For example, an LLM might provide bad information that leads a human to make a bad decision, while an AI agent may be empowered to make that decision without any human intervention. Assurance requires not just an approach for considering what could happen, but a real-time framework for monitoring how this digital workforce operates, establishing human or IT guardrails for the agentic operating model and preventing bad decisions before they are executed.
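
A guardrail for an agentic operating model can be as simple as an escalation rule between the agent and the systems it acts on. The sketch below is a minimal illustration; the autonomy limit, action type and approval callback are hypothetical placeholders for an organization’s own policy.

    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str      # e.g., "vendor_payment" (hypothetical action type)
        amount: float

    # Hypothetical policy: the agent may act autonomously below this limit;
    # anything above is escalated to a human reviewer.
    AUTONOMY_LIMIT = 10_000.00

    def guardrail(action: Action, approve) -> str:
        if action.amount <= AUTONOMY_LIMIT:
            return f"executed {action.kind} for {action.amount:,.2f}"
        if approve(action):  # human-in-the-loop checkpoint
            return f"executed {action.kind} after human approval"
        return f"blocked {action.kind}: human approval denied"

    print(guardrail(Action("vendor_payment", 2_500.00), approve=lambda a: True))
    print(guardrail(Action("vendor_payment", 75_000.00), approve=lambda a: False))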


Conclusions

In an AI-driven world, the role of assurance is evolving to meet the complexities introduced by new technologies. As organizations increasingly rely on AI for critical functions, the demand for independent verification of AI systems’ governance, compliance and security becomes paramount. Advanced methodologies make it possible to monitor existing AI systems and to validate and improve data governance. Ultimately, robust assurance practices will foster trust in AI technologies, enabling organizations to harness their full potential while mitigating associated risks.

Summary 

The role of assurance services is evolving for AI, emphasizing the need for governance, fairness and security. As organizations increasingly adopt AI for critical functions, there is a growing demand for independent verification of AI systems. Transparency, ethical behavior, data governance, and risk are all critical areas of focus. Ultimately, robust assurance practices are essential to fostering trust in AI technologies and mitigating associated risks.


Related articles

2026 audit committee priorities

Navigate 2026 audit committee priorities: risk, reporting, tax and regulation insights to help boards oversee risks and drive strategy.

Establishing practical AI governance for compliance and legal

Dos and don’ts of establishing AI governance frameworks that balance AI innovation with safety, reliability and legal standards.

Why technology implementations call for proactive risk assessments

Technology risk management practices help identify, address and mitigate IT risks.