
AI risk management: establishing safe and effective deployment


Find out what CEOs can learn from mountain bikers when deploying enterprise AI.


In brief
  • Protective gear empowers mountain bikers to ride confidently, just as operational risk management enables CEOs to embrace strategic risks in AI deployment.
  • Responsible AI frameworks can shift from a compliance burden to operational infrastructure, enabling organizations to pursue their goals with greater confidence.
  • Continuous monitoring of risks through AI can reveal hidden interdependencies in supply chains, improving overall risk management strategies.

Mountain bikers know that wearing protective gear doesn’t make you timid on the trail but rather frees you to ride hard and fast on even the trickiest terrain. Stopping to buckle your helmet and pads won’t slow you down; it gives you the confidence to tackle whatever lies ahead. It’s a quality they share with chief executives. It might sound counterintuitive, but reducing your operational risks is typically the safest path to taking even greater ones strategically. This thinking applies to enterprise artificial intelligence (AI) as much as it does to dirt jumping — it’s quicker and easier to deploy new tools and features with responsible AI frameworks in place, secure in the knowledge that any hallucinations won’t come back to haunt you.

This attitude explains why CEOs are more determined than their direct reports to confirm that guardrails are in place before charging further ahead with AI. Conversely, it might also explain why 95% of enterprise AI pilots have produced no measurable P&L impact to date — because companies don’t feel safe enough to move the needle. The ability to evaluate and tolerate risk sits at the heart of every competitive endeavor, and so it’s fitting that Ernst & Young LLP (EY US) has chosen to test the hypothesis that safety equals speed by using its own governance frameworks to automate and transform risk assessment itself. A task that once took 50 hours now requires only six, but the implications extend much further than mere savings. Organizations able to trust — really trust — that their AI pilots won’t go awry will have a head start on reimagining not only risk assessment but also the enterprise itself.


“Don’t view AI as an efficiency gain — that’s table stakes,” says Sinclair Schuller, EY Americas Responsible AI Leader. “And it ends up being a race to the bottom because everyone’s pitching efficiency.” With its own guardrails up, EY US is already busy pursuing a different vision: the vendor snapshots that experts once captured after weeks of work evolve into a continuous process in which AI bears the mechanical burden while human judgment becomes more valuable, not less.


Continuously monitored risk


Prior to AI, third-party risk assessment consumed those 50 hours through painstaking reviews of contracts, SEC filings, liquidity risks, password policies, cybersecurity breaches and lawsuits, to name just a few. Assessors would plow through this documentation guided by more than 100 questions before assembling a final report and assigning a risk score — all while fighting fatigue and deadline pressure that conspired to compromise their thoroughness. Today, the EY tool ingests these documents more or less instantly, populating detailed responses with citations in seconds. Calling off this paper chase has only heightened the human assessors’ irreplaceable value in understanding a firm’s internal dynamics, grasping the implications of pending legislation or providing other intuitive insights that, as Schuller puts it, “only a person could know.”
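The article doesn’t describe the tool’s internals, but the pattern it implies (a fixed question set answered from ingested documents, with every response carrying citations) can be sketched in a few lines of Python. Everything below is illustrative: the document names and questions are invented, and simple keyword matching stands in for the model-driven retrieval a real tool would use.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    document: str   # name of the source document
    excerpt: str    # passage that supports the answer

@dataclass
class Answer:
    question: str
    response: str
    citations: list  # every answer carries its supporting citations

# Hypothetical question set; a real assessment works through 100+ of these.
QUESTIONS = [
    "Does the vendor enforce a password policy?",
    "Has the vendor disclosed any cybersecurity breaches?",
]

def assess(documents, questions):
    """Answer each assessment question from the ingested documents.

    Simple keyword overlap stands in for the model call: paragraphs that
    share terms with the question are cited as evidence.
    """
    answers = []
    for question in questions:
        keywords = {w.lower().strip(".,?") for w in question.split() if len(w) > 4}
        citations = []
        for name, text in documents.items():
            for paragraph in text.split("\n\n"):
                words = {w.lower().strip(".,?") for w in paragraph.split()}
                if keywords & words:
                    citations.append(Citation(name, paragraph[:200]))
        response = "Evidence found." if citations else "No evidence found; flag for human review."
        answers.append(Answer(question, response, citations))
    return answers

if __name__ == "__main__":
    docs = {
        "security_policy.txt": "All staff accounts follow a 14-character password policy.",
        "10-K_extract.txt": "The company reported no material cybersecurity breaches in the period.",
    }
    for a in assess(docs, QUESTIONS):
        print(f"{a.question} -> {a.response} ({len(a.citations)} citation(s))")
```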


The next step is asking AI to think for itself. “We’re moving to continuously monitored risk through autonomous agents,” Schuller explains, describing systems attached to multiple data sources — public feeds, private subscriptions and real-time market data — that wake software agents to reassess companies when conditions change. This isn’t a process made faster by AI but one made possible only by it. “Imagine if somebody had asked you to do this 10 years ago,” Schuller says. “How could you possibly monitor all these sources looking for kernels of evidence of a risk profile change for a single vendor? You couldn’t! You’d have to assign a thousand people.” And now, a thousand agents.
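Schuller doesn’t spell out how such agents would be wired together, so the following is only a rough sketch of the event-driven idea: per-vendor agents that stay idle until a watched feed reports a change, then wake to reassess. The class names, vendors and events are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FeedEvent:
    vendor: str
    source: str   # e.g. a public feed, a private subscription, market data
    detail: str

@dataclass
class MonitoringAgent:
    """One lightweight agent per vendor, woken only when a watched source changes."""
    vendor: str
    history: list = field(default_factory=list)

    def reassess(self, event):
        # A real agent would re-run the risk assessment against fresh evidence;
        # here we just record the trigger and return a placeholder verdict.
        self.history.append(event)
        return f"{self.vendor}: risk profile re-evaluated after '{event.detail}' ({event.source})"

class ContinuousMonitor:
    """Routes incoming events to the agent responsible for that vendor."""

    def __init__(self, vendors):
        self.agents = {v: MonitoringAgent(v) for v in vendors}

    def on_event(self, event):
        agent = self.agents.get(event.vendor)
        if agent:
            print(agent.reassess(event))

if __name__ == "__main__":
    monitor = ContinuousMonitor(["Acme Hosting", "Globex Payments"])
    monitor.on_event(FeedEvent("Acme Hosting", "public news feed", "reported data-center outage"))
    monitor.on_event(FeedEvent("Globex Payments", "market data", "credit rating downgraded"))
```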

Reimagining the business model

These systems won’t just monitor single firms but will also track previously hidden interdependencies woven throughout their supply chains, alerting human assessors to any material events that may merit escalation to clients. Built-in safeguards tamp down hallucinations by requiring citations and by evaluating prompts before passing them along. This architectural approach makes trust systematic rather than aspirational.
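As a minimal illustration of those two safeguards (assuming citations appear as bracketed source tags and prompt screening is a simple phrase check, neither of which the article specifies), a guard layer might look something like this:

```python
import re

def screen_prompt(prompt):
    """Evaluate a prompt before it is passed along to the model.

    A real guard would run policy and injection checks; this one only
    rejects a few obviously out-of-scope instructions.
    """
    banned = ("ignore previous instructions", "fabricate", "make up")
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in banned):
        raise ValueError("Prompt rejected by pre-screening guard")
    return prompt

def require_citations(answer, allowed_sources):
    """Reject model output that fails to cite a known source.

    Citations are assumed to appear as [source_name] tags in the answer.
    """
    cited = set(re.findall(r"\[([^\]]+)\]", answer))
    if not cited:
        raise ValueError("Answer rejected: no citations present")
    unknown = cited - set(allowed_sources)
    if unknown:
        raise ValueError(f"Answer rejected: cites unknown sources {unknown}")
    return answer

if __name__ == "__main__":
    sources = {"10-K_extract.txt", "security_policy.txt"}
    prompt = screen_prompt("Summarize the vendor's disclosed breach history.")
    draft = "No material breaches were disclosed in the period [10-K_extract.txt]."
    print(require_citations(draft, sources))
```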

For EY US, reinventing risk assessment also means reimagining the business model underpinning it, which by necessity will move from fee-for-service deliverables to subscription-based continuous intelligence. “Historically, there was a direct correlation between the number of deliverables we’d create for a client and the number of people needed to create that deliverable,” Schuller notes. No longer. This decoupling of work from personnel points toward a broader reinvention of risk assessment and the firms providing it, moving from selling time to peace of mind.

Summary 

EY US’s sprint toward the agentic future ahead of its rivals depends on making responsible AI the default path rather than an afterthought. This has involved creating reusable software components, such as hallucination guards, that can be embedded directly into AI workflows. “If you don’t modularize it, it becomes very hard to scale,” Schuller warns. These and other safeguards will help shift responsible AI from a compliance burden into operational infrastructure, enabling organizations to pursue their own ambitions with confidence — wearing their helmets as they race toward steeper, more rewarding terrain.
