External pressures
The external landscape is also rapidly evolving. Customers are deploying their own AI agents, regulators are pushing for real-time reporting and criminals are exploiting advanced technologies. In this AI arms race, agentic AI may serve as a crucial defense mechanism.
Rethinking the risk operating model
To fully leverage agentic AI, leaders must rethink the core operating model of the risk function, focusing on:
- People: Developing new roles that foster human-AI collaboration and strengthening critical thinking and judgment skills that technology cannot replace.
- Process: Designing workflows that support agent autonomy while maintaining essential human oversight.
- Technology: Implementing robust infrastructure and tools, including AI ‘guardrails’ to promote safe agent behavior. Risk Strategists already lead in this area: they are significantly more likely to use advanced techniques such as horizon scanning (81% more likely), stress testing, Monte Carlo simulations and black swan analysis, methods that agentic AI can enhance and scale (a brief illustration follows this list).
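To make one of these methods concrete, the sketch below shows a minimal Monte Carlo simulation of aggregate annual losses, the kind of analysis an AI agent could re-run continuously as assumptions change. It is not drawn from the EY research: the loss model, parameter values and function name (`simulate_annual_losses`) are illustrative assumptions only.

```python
import numpy as np

def simulate_annual_losses(n_trials: int = 100_000,
                           event_rate: float = 3.0,
                           loss_mean: float = 11.0,
                           loss_sigma: float = 1.2,
                           seed: int = 42) -> np.ndarray:
    """Monte Carlo sketch of aggregate annual operational losses.

    Event counts are Poisson-distributed; individual loss severities are
    lognormal. All parameter values are placeholders, not calibrated figures.
    """
    rng = np.random.default_rng(seed)
    # Number of loss events in each simulated year
    event_counts = rng.poisson(event_rate, size=n_trials)
    losses = np.zeros(n_trials)
    for i, count in enumerate(event_counts):
        if count:
            # Sum of lognormal severities for that year's events
            losses[i] = rng.lognormal(loss_mean, loss_sigma, size=count).sum()
    return losses

if __name__ == "__main__":
    annual_losses = simulate_annual_losses()
    # Tail metrics a risk agent might surface for human review
    var_99 = np.percentile(annual_losses, 99)          # 99% Value at Risk
    expected_shortfall = annual_losses[annual_losses >= var_99].mean()
    print(f"Mean annual loss:   {annual_losses.mean():,.0f}")
    print(f"99% VaR:            {var_99:,.0f}")
    print(f"Expected shortfall: {expected_shortfall:,.0f}")
```

In an agentic setup, the value lies less in any single simulation than in agents varying the inputs, rerunning the analysis at scale and flagging material shifts in the tail metrics for human review.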
Rethinking roles and skills
As organizations adapt, new roles will emerge, including:
- AI-augmented business relationship managers: Collaborate with AI copilots to analyze data and draft risk narratives.
- AI orchestrators or ‘conductors’: Manage teams of digital risk agents, assigning tasks, setting performance goals and ensuring quality output.
- AI training and governance specialists: Safeguard the accuracy, fairness and compliance of AI agent behavior.
Ultimately, human judgment will remain the final checkpoint for critical risk decisions, reinforcing the importance of a human-in-the-loop approach.
How to prepare for an agentic future: Next steps for risk leaders
- Move from active experimentation to early adoption: Build out use cases and drive greater adoption of agentic AI from the top.
- Design and develop operational frameworks: Implement robust governance and controls, integrating AI enablers and guardrails within which agents must operate (a minimal guardrail sketch follows this list).
- Evolve career paths: Develop ‘citizen developers’ and ‘AI-savvy’ risk officers through targeted training and upskilling.
- Rethink the org chart: Shift to smaller human teams overseeing more AI agents, creating new roles like Head of Automated Risk Operations.
- Address the talent gap: Demand for AI-aware risk professionals is outstripping supply, driving up recruitment and retention costs; some organizations now treat this shortage as a board-level strategic risk.
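As a rough illustration of the guardrails referenced above, the sketch below shows one hypothetical form such a control could take: a policy check that approves, escalates or blocks a proposed agent action before it executes. The thresholds, action types and names (`ProposedAction`, `apply_guardrails`) are assumptions for illustration, not an EY framework.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"    # agent may proceed autonomously
    ESCALATE = "escalate"  # route to a human reviewer (human-in-the-loop)
    BLOCK = "block"        # action is outside the agent's mandate

@dataclass
class ProposedAction:
    agent_id: str
    action_type: str        # e.g. "draft_report", "adjust_limit"
    monetary_impact: float  # estimated financial exposure of the action
    confidence: float       # agent's self-reported confidence, 0..1

# Illustrative policy values; a real guardrail set would be owned by risk governance.
ALLOWED_ACTIONS = {"draft_report", "flag_anomaly", "adjust_limit"}
AUTONOMY_LIMIT = 50_000     # above this exposure, a human must sign off
MIN_CONFIDENCE = 0.8        # below this, escalate regardless of impact

def apply_guardrails(action: ProposedAction) -> Verdict:
    """Decide whether a proposed agent action proceeds, needs review, or is blocked."""
    if action.action_type not in ALLOWED_ACTIONS:
        return Verdict.BLOCK
    if action.monetary_impact > AUTONOMY_LIMIT or action.confidence < MIN_CONFIDENCE:
        return Verdict.ESCALATE
    return Verdict.APPROVE

if __name__ == "__main__":
    proposal = ProposedAction("credit-risk-agent-07", "adjust_limit", 120_000, 0.93)
    print(apply_guardrails(proposal))  # Verdict.ESCALATE -> human review required
```

Making the escalation path explicit keeps the human-in-the-loop principle noted earlier intact: agents act freely inside the guardrails and hand anything outside them back to people.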
Without a proactive plan to reskill teams and adapt operating models, risk functions could become outdated, eroding trust and leaving organizations vulnerable to emerging threats. Those that act now, however, can set a new standard for risk management in the AI era.
These steps reflect the same mindset and organizational readiness that characterize Risk Strategists. Agentic AI builds on this foundation, offering the next stage of evolution for those ready to move from traditional models to intelligent, collaborative risk operations.