
How internal audit can govern AI risks and promote compliance

Internal audit functions must adapt to AI complexities while also fostering innovation and learning, agility and trust.


In brief
  • Internal audit faces challenges in managing AI risks, requiring a proactive approach to governance.
  • Chief audit executives should develop annual AI audit plans, educate teams on risks and integrate AI governance into frameworks to promote responsible use.
  • Collaboration with executive leaders and risk committees is essential for effective AI oversight that operates at the speed of trust.

Artificial intelligence (AI) is accelerating change across every function, enabling new business models, faster decisions and greater automation. At the same time, AI risks are emerging in a nonlinear, accelerated, volatile and interconnected (NAVI) operating environment, where issues such as model failure, bias, data leakage or regulatory intervention can surface suddenly and cascade rapidly across the enterprise. In this context, governance cannot rely solely on periodic reviews or retrospective assurance.

Internal audit (IA) is uniquely positioned to help organizations operate at the speed of trust. Beyond providing assurance, chief audit executives (CAEs) are increasingly expected to help leadership make risk-informed decisions about when, where and how AI should be deployed. This requires internal audit to evolve from a primarily compliance-oriented role toward one that enables timely, confident decision-making while preserving independence and rigor.

CAEs and internal audit functions face a tall order: to guard against risks from technologies that they likely don’t fully understand and to continue to evolve, without hamstringing functions that see AI and GenAI adoption as do-or-die imperatives. To stay ahead, internal audit must get up to speed on AI risks and controls so it can verify alignment and provide assurance that AI systems are used responsibly across the organization.

 

As AI capabilities scale and evolve, traditional calendar-driven audit planning is increasingly misaligned with the velocity of risk. Instead of relying solely on annual AI audit plans, internal audit should adopt a rolling, trigger-based approach to AI coverage. Triggers may include events such as the deployment of high-impact AI use cases, significant model changes, expansion to new data sources, regulatory developments, third-party AI adoption or sustained breaches of defined risk thresholds.

 

By defining these triggers in advance and agreeing on proportionate audit responses, internal audit can move resources quickly to the areas of greatest risk and value. This approach preserves independence and rigor while reducing decision latency, allowing internal audit to remain relevant in the moments that matter most.
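The trigger-based approach described above can be sketched as a simple rule set. Everything in this sketch is illustrative: the trigger names, which triggers have fired, and the paired audit responses are hypothetical examples, not part of any prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    """A pre-agreed risk event paired with a proportionate audit response."""
    name: str
    fired: bool       # whether the event has occurred this cycle
    response: str     # the audit response agreed in advance

def plan_rolling_coverage(triggers):
    """Return the audit responses for every trigger that has fired."""
    return [t.response for t in triggers if t.fired]

# Hypothetical trigger register (names and states are illustrative only)
triggers = [
    Trigger("high-impact AI use case deployed", True, "targeted pre-deployment review"),
    Trigger("significant model change", False, "model change audit"),
    Trigger("new regulatory development", True, "regulatory gap assessment"),
    Trigger("sustained risk-threshold breach", False, "deep-dive audit"),
]

print(plan_rolling_coverage(triggers))
# → ['targeted pre-deployment review', 'regulatory gap assessment']
```

The design point is that the mapping from trigger to response is fixed before any event occurs, so audit resources move without a fresh planning debate each time.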


Chapter #1

Key factors complicating internal audit’s role in auditing AI

Internal audit faces stakeholder demands, evolving AI regulations and a need for skilled talent.

AI broadly refers to machines that mimic humanlike cognitive abilities. This includes generative AI (GenAI), which creates content when prompted by a user, as well as nascent AI agents. Through its ease of use, GenAI has democratized AI, making the technology accessible to any user, whereas other types of AI have generally only been accessible to data scientists.

Against this background of dramatic change, with powerful tools in the hands of people who may not fully understand them, internal audit is confronting:

  • Increasing stakeholder demands for desired outcomes and risk mitigation. Institutional and activist investors — as well as consumers, employees and business partners — are asking more difficult questions around how companies are managing AI-related risks and issues.
  • Evolving global regulations focusing on companies’ use of AI. Jurisdictions and regulatory bodies around the world are developing guidance on the design, use and deployment of AI, including risk management.
  • Ad hoc and siloed approaches to managing AI risks and opportunities. AI issues span various functions within a company, and ownership of data, risks and controls may be unclear or unassigned. Integration of AI issues into existing governance and oversight models is limited, potentially resulting in unidentified gaps in risk coverage across the company.
  • Heightened demand for AI skill sets and upskilling talent. Organizations are increasing training and hiring new roles to address organizational ambitions and risk management activities, including oversight and governance of AI processes, risks and controls. Continuous learning and innovation are critical to keep pace with AI evolution.

Chapter #2

Top considerations for AI governance

CAEs must balance AI governance with managing risks and fostering a culture of awareness.

Effective AI governance starts with clarity: clarity on strategy, on risk appetite and on decision rights. In a fast-moving AI landscape, governance must be designed to support execution at speed rather than slow it down. This means establishing governance structures that define accountability, escalation paths and oversight expectations in advance, so leaders are not debating roles or authority when risks materialize.

Rather than treating AI governance as a static framework, organizations should embed it into the operating rhythm of the business. This includes aligning AI policies, standards and controls with strategic objectives and confirming that governance activates dynamically in response to changing conditions. Internal audit can play a critical role by assessing whether these governance mechanisms are coherent, decision ready and aligned with how the business operates.


In the first line of defense (LoD), operational teams within the risk-taking business units must be equipped with the tools and training to identify and manage AI risks. They should foster a culture of risk awareness and encourage proactive risk management practices. This includes owning the management of vendors that use AI and machine learning and performing contract reviews, for example, as well as managing data privacy considerations such as consumer notices and requests to opt out or delete information.

In the second line of defense, risk and compliance functions should define clear risk management policies and frameworks that align with the organization’s AI objectives. They should provide guidance and support to the first line in implementing risk controls and enable continuous improvement of risk management practices. These functions would fulfill their traditional remit in conducting model testing and performance assessment for AI, for instance, as well as assessing risks and establishing controls for data security, privacy and other key heightened risks for large language models.


Chapter #3

Internal audit’s role in responsible AI

There are three key areas: AI governance, auditing AI performance and enhancing enterprise IQ.

As the third line of defense, internal audit has an important role to fulfill for responsible AI (RAI), just as it would for any other technology with tremendous upside potential alongside downside risk. Internal audit functions are responding in three key areas:

1. Gaining a seat at the table around AI governance. However, multiple seats at several tables are likely needed — depending on whether the AI governance structure is federated or decentralized, or whether it is still formative and hasn’t coalesced around a central team.

2. Auditing the performance of the AI framework and governance, as well as AI systems and products. This may involve early-stage work in preparation for broader rollouts, or compliance audits against a regulatory framework. It also means auditing use cases themselves: the AI systems or solutions in use may drift into risk over time or become a source of risk through improper ingestion of data.

3. Raising the enterprise IQ around responsible AI. Internal audit may sponsor governance committees and find other ways to share knowledge. It also serves as a custodian of the control environment’s design, making recommendations as warranted, harmonizing the taxonomy and language emerging around AI and making sure it is understood in the business. That education comes about as internal audit reviews different functions, processes and activities.


Chapter #4

Asking the right questions to assess AI maturity

Internal audit can enhance dialogue by exploring AI strategy, governance, risk management and metrics.

The abstract power of AI, and the extent to which every function in every industry can draw upon AI-powered use cases, makes just getting started a tricky endeavor without a one-size-fits-all approach. These questions can help internal audit start or further the dialogue.

CAE playbook: the responsible AI (RAI) development journey

Strategy

  • Is your company’s business model prepared for accelerating AI opportunities and risk mitigation?
  • Is your internal audit function operating as a risk traditionalist or a risk strategist?
  • Has your organization incorporated AI into strategic decision-making and business case and benefits analysis?
  • What is your organization’s internal and external AI communication strategy?
  • Does your organization have the right external alliances and partnerships to enable achieving its AI goals?
  • How does your organization define long-term value for AI?

Governance

  • What stage is your responsible AI program currently in? Are you in the early development phase, scaling up or optimizing for efficiency while aligning with emerging stakeholder needs?
  • Does your organization have a formal committee dedicated to AI governance?
  • What is management’s role in setting the AI strategy and in managing associated risks?
  • Does your organization have an AI risk policy?
  • How does your organization cascade AI throughout the three lines of defense (3LoD)?
  • Does your company clearly understand its priority AI issues across all stakeholders?

Risk management

  • Has your organization incorporated elevated AI risks into existing frameworks or taxonomies?
  • How does your organization provide program assurance for AI initiatives to confirm they deliver intended outcomes?
  • Has your organization assessed its processes and technology/tools and developed sufficient models to enable management of the AI lifecycle?
  • Does your organization embed risk and controls into the AI lifecycle?

Metrics and targets

  • What is the process to inventory, approve and track progress of AI use?
  • Has your organization defined specific metrics or targets to measure and monitor AI impacts?
  • How does your organization evaluate AI performance and create accountability for achieving targets?
  • Has your organization established reporting and communication channels for AI-related initiatives?
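As a minimal illustration of the inventory and metrics questions above, the sketch below shows one way an AI use-case inventory might feed simple monitoring metrics. All fields, use-case names, statuses and risk tiers are hypothetical examples, not a prescribed standard.

```python
# Hypothetical AI use-case inventory: each entry records what is deployed,
# its approval status and an assigned risk tier (all values illustrative).
inventory = [
    {"use_case": "invoice triage bot", "status": "approved", "risk_tier": "low"},
    {"use_case": "credit scoring model", "status": "under review", "risk_tier": "high"},
    {"use_case": "marketing copy GenAI", "status": "approved", "risk_tier": "medium"},
]

def metrics(inv):
    """Basic figures a CAE might track: totals, approvals and high-risk count."""
    approved = sum(1 for u in inv if u["status"] == "approved")
    high_risk = sum(1 for u in inv if u["risk_tier"] == "high")
    return {"total": len(inv), "approved": approved, "high_risk": high_risk}

print(metrics(inventory))
# → {'total': 3, 'approved': 2, 'high_risk': 1}
```

Even a basic structure like this makes the inventory, approval and tracking questions answerable with evidence rather than anecdote.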


Chapter #5

Understanding AI entry points and controls for internal audit

Companies must address in-house, vendor and acquisition risks.



Internal audit must understand the vectors through which AI enters an organization and the controls suited for each of them. The chart above reflects a common lifecycle for responsible AI for solutions built in-house: for identifying and prioritizing use cases, building and testing them, and then completing the monitoring and controls to validate that they are working and have not been compromised. However, many companies are also:

  • Buying solutions outright, in which case the lifecycle somewhat resembles the chart above.
  • Encountering AI through third-party vendors — for instance, in software or tools leveraged by a vendor in the normal course of service delivery that has added AI capabilities. Third-party risk questionnaires are crucial here.
  • Making acquisitions, which calls for added due diligence on acquired AI portfolios.

An integrated responsible AI risk management and control environment consists of legacy control activities that need to be revisited and reassessed for readiness in functions like cyber, data privacy, third-party risk management, legal and compliance, as well as net-new control activities housed in those same functions, alongside model risk management controls that span the AI development and procurement lifecycles.


Chapter #6

Next steps: assessing AI readiness

CAEs should assess AI readiness, enhance team skills, and adopt effective audit strategies for AI.

Naturally, all organizations have varying starting points and established processes that they may be able to build on. CAEs should continuously ask where their company is on the responsible AI journey: starting a program, scaling its capabilities or optimizing it?

To stay ahead of whatever comes next — whether a technology to implement within internal audit or one to monitor in the business — CAEs should be aware of the steps being undertaken as organizations build or improve upon their RAI governance and risk management operating model and capabilities. It is up to them to see the full process from planning to reassessing, incorporating their knowledge of the organization and the players involved. With a greater understanding of risk and mitigation, coupled with supercharged technical capabilities, CAEs and other executives gain the confidence to stride into the future.


  • Yiming Chang and Vikas Bajwa, both senior managers in the Risk Consulting practice of Ernst & Young LLP, made key contributions to this report.

Summary 

Internal audit must navigate the complexities of AI by enhancing governance and risk management, agility and trust. Chief audit executives should create proactive audit plans, educate teams on AI risks and collaborate with leadership to promote responsible AI use while fostering innovation and maintaining compliance with evolving regulations in an unpredictable world.
