This attitude explains why CEOs are more determined than their direct reports to confirm that guardrails are in place before charging further ahead with AI. It may also explain why 95% of enterprise AI pilots have produced no measurable P&L impact to date: companies don’t feel safe enough to move the needle. The ability to evaluate and tolerate risk sits at the heart of every competitive endeavor, so it’s fitting that Ernst & Young LLP (EY US) has chosen to test the hypothesis that safety equals speed by using its own governance frameworks to automate and transform risk assessment itself. A task that once took 50 hours now requires only six, but the implications extend far beyond time savings. Organizations able to trust, really trust, that their AI pilots won’t go awry will have a head start on reimagining not only risk assessment but the enterprise itself.
“Don’t view AI as an efficiency gain — that’s table stakes,” says Sinclair Schuller, EY Americas Responsible AI Leader. “And it ends up being a race to the bottom because everyone’s pitching efficiency.” With its own guardrails up, EY US is already pursuing a different vision: the vendor snapshots that experts once assembled over weeks of work become a continuous process in which AI bears the mechanical burden while human judgment becomes more valuable, not less.
Continuously monitored risk
Prior to AI, third-party risk assessment consumed those 50 hours through painstaking reviews of contracts, SEC filings, liquidity risks, password policies, cybersecurity breaches and lawsuits, to name just a few. Assessors would plow through this documentation guided by more than 100 questions before assembling a final report and assigning a risk score, all while fatigue and deadline pressure conspired to compromise their thoroughness. Today, the EY tool ingests these documents almost instantly and populates detailed, citation-backed responses in seconds. Calling off this paper chase has only heightened the human assessors’ irreplaceable value: understanding a firm’s internal dynamics, grokking the implications of pending legislation or providing other intuitive insights that, as Schuller puts it, “only a person could know.”
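The article doesn’t describe the tool’s internals, but the workflow it outlines — ingest vendor documents, answer a fixed questionnaire with citations, then roll the answers up into a risk score — can be sketched in a few lines of Python. Everything below (the ask_llm placeholder, the Evidence, Answer and assess_vendor names, the averaging of per-question weights) is an illustrative assumption, not a description of EY’s tool.

```python
# Illustrative sketch of a questionnaire-driven risk assessment pipeline.
# All names and the scoring rule are assumptions, not EY's implementation.
from dataclasses import dataclass


@dataclass
class Evidence:
    source: str   # e.g. "vendor contract", "10-K filing", "breach disclosure"
    excerpt: str  # passage cited in support of the answer


@dataclass
class Answer:
    question: str
    response: str
    citations: list[Evidence]
    risk_weight: float  # 0.0 (no concern) to 1.0 (severe concern)


def ask_llm(question: str, documents: dict[str, str]) -> Answer:
    """Placeholder for a retrieval + language-model call that answers one
    questionnaire item and returns the excerpts it relied on."""
    raise NotImplementedError("wire up a retrieval index and model client here")


def assess_vendor(documents: dict[str, str],
                  questionnaire: list[str]) -> tuple[float, list[Answer]]:
    """Answer every questionnaire item against the ingested documents and roll
    the per-question weights up into a single vendor risk score."""
    answers = [ask_llm(q, documents) for q in questionnaire]
    score = sum(a.risk_weight for a in answers) / len(answers) if answers else 0.0
    return score, answers
```

In this framing, the questionnaire would carry the 100-plus questions the article mentions, and the human assessor reviews the cited excerpts and exercises judgment rather than plowing through the source documents end to end.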
The next step is asking AI to think for itself. “We’re moving to continuously monitored risk through autonomous agents,” Schuller explains, describing systems attached to multiple data sources (public feeds, private subscriptions and real-time market data) that wake software agents to reassess companies when conditions change. This isn’t a process made faster by AI, but one made possible only by it. “Imagine if somebody had asked you to do this 10 years ago,” Schuller says. “How could you possibly monitor all these sources looking for kernels of evidence of a risk profile change for a single vendor? You couldn’t! You’d have to assign a thousand people.” And now, a thousand agents.
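As a rough illustration of the pattern Schuller describes, the sketch below watches several feeds and wakes a reassessment callback whenever a vendor’s signal changes. The Feed protocol, the polling loop and the hourly interval are assumptions made for brevity; a production system would more likely react to webhooks or streaming events than poll.

```python
# Illustrative sketch of continuously monitored risk: one agent per vendor,
# re-running the assessment whenever any watched feed reports a change.
import time
from typing import Callable, Protocol


class Feed(Protocol):
    name: str

    def latest_signal(self, vendor: str) -> str:
        """Return a fingerprint of the feed's current state for this vendor,
        e.g. a hash over new filings, breach reports or market data."""
        ...


def monitor(vendor: str,
            feeds: list[Feed],
            reassess: Callable[[str, str], None],
            poll_seconds: int = 3600) -> None:
    """Wake the reassessment agent whenever any feed's signal changes."""
    last_seen: dict[str, str] = {}
    while True:
        for feed in feeds:
            signal = feed.latest_signal(vendor)
            if last_seen.get(feed.name) != signal:  # conditions changed
                last_seen[feed.name] = signal
                reassess(vendor, feed.name)         # e.g. re-run the assessment
        time.sleep(poll_seconds)
```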