What risk leaders need to do now about agentic AI

Agentic AI empowers risk management, but only leaders who adapt operating models, evolve the CRO role and upskill teams will unlock its value.


In brief

  • Agentic AI is poised to revolutionize risk management, offering unprecedented efficiency and strategic advantages for organizations willing to adapt.
  • To harness AI's full potential, risk leaders must rethink operating models and adopt new roles and processes that integrate human and machine collaboration.
  • Organizations that upskill and develop frameworks to embed AI into operations can accelerate progress, build trust and help ensure responsible AI use.

The challenges faced by chief risk officers (CROs) have never been sharper. CROs are expected to harness artificial intelligence (AI) to capture risks more comprehensively, automate processes and boost efficiency, yet at the same time, to safeguard the human judgment that underpins sound decision-making.

As one CRO recently admitted, her excitement about AI’s potential was matched only by unease about its implications for her team: Would these tools elevate their expertise or quietly erode it?

Her dilemma captures the inflection point the profession now faces. The question is no longer if AI will transform risk management, but how to shape that transformation – preserving the judgment, accountability and foresight that define true risk leadership.

This tension lies at the center of every conversation we’re having with risk leaders. The real issue isn’t man versus machine, but how to redesign the risk function so AI amplifies human insight – reshaping team dynamics, decision-making and the skills that define the next generation of risk professionals.

Our view is clear: AI offers a once-in-a-generation opportunity to transform risk management. When embedded into redesigned processes, it can dramatically expand risk coverage, elevate the experience of managing risk and enable faster, better-informed decisions.

Risk Strategist mindset

The 2025 EY Global Risk Transformation Study identifies two archetypes: Risk Strategists and Risk Traditionalists. Risk Strategists are organizations that have embraced strategic, tech-enabled approaches to risk, making them 48% more likely to reduce unexpected risks and 35% more likely to improve incident response times.


Agentic AI represents the next evolution of this mindset: it requires the cultural readiness, structural agility and innovation orientation that Risk Strategists have begun to cultivate. By leveraging AI, these organizations can strengthen their risk management capabilities – broadening coverage, improving data quality to generate faster insights, enabling better decision-making and ultimately building a more proactive risk posture.

In contrast, Risk Traditionalists – those still operating in siloed, compliance-driven models – will struggle to realize the benefits of agentic AI unless they transform their foundations.

As organizations navigate this transition, risk leaders must foster an environment that encourages experimentation and learning, improving both human and AI capabilities for optimal outcomes.

The latest EY/Institute of International Finance (IIF) global bank risk management survey highlights the challenges facing today’s CROs:

  • The risk landscape is broadening. Data risks (privacy, governance and control) and the use of AI within organizations are moving rapidly up CROs’ risk agendas.

  • 57% of banks see increased AI adoption as a key initiative for managing this expanding risk profile.

  • 12% of respondents report not using AI at all, and most of those that do apply it primarily to anomaly detection and the automation of operational tasks – the lower end of the value spectrum.

Barriers to adoption

As the EY/IIF survey shows, the path to adopting agentic AI isn’t without its challenges. Risk leaders face mounting pressure on multiple fronts – rising productivity demands, geostrategic disruption, regulatory scrutiny and acute talent shortages. Although pilot programs indicate that AI can deliver productivity gains of up to tenfold, many organizations struggle to make the cultural and organizational shifts needed for successful adoption.

Agentic AI requires a transformation that goes beyond simply implementing generative AI (GenAI); it demands a rethinking of how risk professionals develop their skills and judgment. The EY risk transformation study shows only 32% of organizations globally qualify as Risk Strategists. The rest are held back by risk-averse cultures, siloed structures and an inability to quantify the ROI of advanced risk management – barriers that will inhibit agentic AI adoption.

Closing the skills and trust gap

Compounding these challenges is a significant skills gap in AI fluency. According to the Financial Services Skills Commission report1, 81% of firms cite a lack of specialist talent as a barrier to adopting AI. Organizations should ensure their workforce combines AI knowledge with deep business understanding and essential human skills such as judgment, adaptability and ethical decision-making. Embedding AI ethics into operational frameworks is non-negotiable for a profession built on safeguarding trust.


External pressures

The external landscape is also rapidly evolving. Customers are deploying their own AI agents, regulators are pushing for real-time reporting and criminals are exploiting advanced technologies. In this AI arms race, agentic AI may serve as a crucial defense mechanism.

 

Rethinking the risk operating model

To fully leverage agentic AI, leaders must rethink the core operating model of the risk function, focusing on:

  • People: Developing new roles that foster human-AI collaboration and strengthening critical thinking and judgment skills that technology cannot replace.

  • Process: Designing workflows that support agent autonomy while maintaining essential human oversight.

  • Technology: Implementing robust infrastructure and tools, including AI ‘guardrails’ to promote safe agent behavior. Risk Strategists already lead in this area. They are significantly more likely to use advanced techniques such as horizon scanning (81% more likely), stress testing, Monte Carlo simulations and black swan analysis – methods that agentic AI can enhance and scale.
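To make this concrete, below is a minimal sketch of one of the techniques named above – a Monte Carlo simulation of aggregate annual losses that an agent could run and re-run as portfolios or assumptions change. It uses NumPy; the Poisson/lognormal loss model, the parameter values and the 99% confidence level are illustrative assumptions, not a prescribed methodology.

```python
import numpy as np

def simulate_losses(n_scenarios: int = 100_000,
                    loss_frequency: float = 3.0,
                    loss_mu: float = 11.0,
                    loss_sigma: float = 1.2,
                    seed: int = 42) -> np.ndarray:
    """Monte Carlo simulation of annual aggregate operational losses.

    Assumes Poisson loss-event frequency and lognormal severities; both
    distributions and all parameter values are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    # Number of loss events in each simulated year
    event_counts = rng.poisson(loss_frequency, size=n_scenarios)
    # Aggregate severity per simulated year (empty draws sum to 0.0)
    return np.array([
        rng.lognormal(loss_mu, loss_sigma, size=k).sum()
        for k in event_counts
    ])

if __name__ == "__main__":
    losses = simulate_losses()
    var_99 = np.percentile(losses, 99)                 # 99% value at risk
    expected_shortfall = losses[losses >= var_99].mean()
    print(f"Mean annual loss:   {losses.mean():,.0f}")
    print(f"99% VaR:            {var_99:,.0f}")
    print(f"Expected shortfall: {expected_shortfall:,.0f}")
```

The value of automating a routine like this lies less in the single run than in an agent rerunning it continuously against fresh data and flagging when the tail estimates drift beyond tolerance.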

 

Rethinking roles and skills

As organizations adapt, new roles will emerge, including:

  • AI-augmented business relationship managers: Collaborate with AI copilots to analyze data and draft risk narratives.

  • AI orchestrators or ‘conductors’: Manage teams of digital risk agents, assigning tasks, setting performance goals and ensuring quality output.

  • AI training and governance specialists: Safeguard the accuracy, fairness and compliance of AI agent behavior.
     

Ultimately, human judgment will remain the final checkpoint for critical risk decisions, reinforcing the importance of a human-in-the-loop approach.
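As an illustration of that human-in-the-loop checkpoint, the sketch below shows one way an orchestration layer might route agent-drafted assessments: low-severity, high-confidence items proceed automatically, while anything critical is escalated to a human reviewer. The class names, fields and thresholds are hypothetical and exist only to illustrate the pattern.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTO_APPROVED = "auto_approved"
    ESCALATED_TO_HUMAN = "escalated_to_human"

@dataclass
class AgentAssessment:
    """A risk assessment drafted by an AI agent (illustrative fields)."""
    risk_id: str
    severity: int        # 1 (low) to 5 (critical)
    confidence: float    # agent's self-reported confidence, 0.0 to 1.0
    summary: str

def route_assessment(assessment: AgentAssessment,
                     severity_threshold: int = 3,
                     confidence_floor: float = 0.8) -> Decision:
    """Only low-severity, high-confidence items proceed without review;
    everything else is routed to a human decision-maker."""
    if (assessment.severity >= severity_threshold
            or assessment.confidence < confidence_floor):
        return Decision.ESCALATED_TO_HUMAN
    return Decision.AUTO_APPROVED

# Example: a critical finding is always held for human judgment
draft = AgentAssessment("RISK-0042", severity=4, confidence=0.92,
                        summary="Unusual counterparty exposure concentration")
print(route_assessment(draft))   # Decision.ESCALATED_TO_HUMAN
```

In practice, the escalation rules themselves would be owned and periodically reviewed by the risk function, not set by the agents they govern.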

 

How to prepare for an agentic future: Next steps for risk leaders

  • Move from active experimentation to early adoption: Build out use cases and drive greater adoption of agentic AI from the top.

  • Design and develop operational frameworks: Implement robust governance and controls, integrating AI enablers and guardrails within which agents must operate (a minimal guardrail sketch follows this list).

  • Evolve career paths: Develop ‘citizen developers’ and ‘AI-savvy’ risk officers through targeted training and upskilling.

  • Rethink the org chart: Shift to smaller human teams overseeing more AI agents, creating new roles like Head of Automated Risk Operations.

  • Address the talent gap: As demand for AI-aware risk professionals outstrips supply, firms may face higher recruitment and retention costs, which some organizations are beginning to treat as a strategic risk at the board level.
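One way to express the guardrails referenced above is as a declarative policy checked before any agent action executes, as in the minimal sketch below. The tool names, limits and policy fields are assumptions for illustration; a production framework would also address logging, model risk controls and regulatory reporting obligations.

```python
from dataclasses import dataclass, field

@dataclass
class AgentGuardrails:
    """Declarative guardrails for a risk agent (illustrative fields only)."""
    allowed_tools: set[str] = field(
        default_factory=lambda: {"read_ledger", "draft_report"})
    max_records_per_query: int = 10_000
    pii_access: bool = False
    requires_human_signoff: set[str] = field(
        default_factory=lambda: {"submit_regulatory_filing"})

def is_action_permitted(guardrails: AgentGuardrails, tool: str,
                        records_requested: int, touches_pii: bool) -> bool:
    """Return True only if the requested action stays inside the guardrails."""
    if tool not in guardrails.allowed_tools:
        return False
    if tool in guardrails.requires_human_signoff:
        return False  # must be routed to a human, never executed autonomously
    if records_requested > guardrails.max_records_per_query:
        return False
    if touches_pii and not guardrails.pii_access:
        return False
    return True

policy = AgentGuardrails()
print(is_action_permitted(policy, "read_ledger", 500, touches_pii=False))   # True
print(is_action_permitted(policy, "delete_records", 1, touches_pii=False))  # False
```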
     

Without a proactive plan to reskill teams and adapt operating models, risk functions could become outdated – eroding trust and leaving organizations vulnerable to emerging threats. However, those who act now can establish a new standard for risk management in the AI era.


These steps reflect the same mindset and organizational readiness that characterize Risk Strategists. Agentic AI builds on this foundation, offering the next stage of evolution for those ready to move from traditional models to intelligent, collaborative risk operations.


Summary

Agentic AI has the potential to transform risk management, delivering unprecedented efficiency and strategic insight. Capturing this value requires bold action: risk leaders must redesign their operating models, embrace new roles and processes and prioritize upskilling alongside AI ethics. Organizations that act decisively will build stakeholder trust while setting new industry standards. The window for competitive advantage is narrow; leaders who move now won't just adapt to the AI era – they'll define it.
