
What risk leaders need to do now about agentic AI

Agentic AI empowers risk management, but only leaders who adapt operating models, evolve the CRO role and upskill teams will unlock its value.


In brief

  • Agentic AI is poised to revolutionize risk management, offering unprecedented efficiency and strategic advantages for organizations willing to adapt.
  • To harness AI's full potential, risk leaders must rethink operating models and adopt new roles and processes that integrate human and machine collaboration.
  • Organizations that upskill and develop frameworks to embed AI into operations can accelerate progress, build trust and help ensure responsible AI use.

A Luxembourg perspective

AI integration success: moving beyond pilots and rethinking operating models

Agentic AI marks a step change for risk management by enabling autonomous, continuous risk identification and response, rather than periodic, manual oversight. The real challenge for risk leaders is not adopting the technology, but redesigning the risk function so AI amplifies human judgment, accountability and foresight rather than replacing them. Organizations that already operate as “Risk Strategists” (those with integrated, technology-enabled models) are positioned to capture this value. They are also 48% more likely to reduce unexpected risks and 35% more likely to improve incident response times.

In Luxembourg, three market realities make this urgent.

  • DORA reinforces the need for structured oversight when outsourcing to third parties. Firms must have a robust framework to assess, monitor and challenge how AI is used across their service provider chain, beyond contractual assurances. As AI agents become embedded in outsourced processes, third-party risk oversight must extend to understanding AI behavior, decisioning and associated ICT risks end to end.
  • The EU AI Act adds risk-based obligations for use cases such as credit scoring, AML and investment models on top of DORA-aligned ICT and outsourcing rules. Treating these as separate compliance tracks increases cost and weakens control. What’s needed is a single, integrated AI governance framework embedded in the three lines of defense.
  • The skills question is increasingly about oversight, not just engineering. As firms accelerate AI adoption, governance literacy must keep pace. The priority is ensuring the people who hold accountability can understand, challenge, and control AI systems operating across their organizations and service provider chains.

Where to start:

  • Embed AI governance and ethical guardrails into your existing operating model rather than building a parallel structure.
  • Ensure the CRO provides oversight on AI-driven processes, including those operated by third parties.
  • Develop AI-literate risk teams capable of overseeing human-AI collaboration.

Organizations that act now will be able to use AI to build a genuinely stronger risk posture. Those that wait for regulation to fully crystallize will find themselves retrofitting governance into systems already in production.

 


The challenges faced by chief risk officers (CROs) have never been sharper. CROs are expected to harness artificial intelligence (AI) to capture risks more comprehensively, automate processes and boost efficiency, yet at the same time, to safeguard the human judgment that underpins sound decision-making.

As one CRO recently admitted, her excitement about AI’s potential was matched only by unease about its implications for her team: Would these tools elevate their expertise or quietly erode it?

Her dilemma captures the inflection point the profession now faces. The question is no longer if AI will transform risk management, but how to shape that transformation – preserving the judgment, accountability and foresight that define true risk leadership.

This tension lies at the center of every conversation we’re having with risk leaders. The real issue isn’t man versus machine, but how to redesign the risk function so AI amplifies human insight – reshaping team dynamics, decision-making and the skills that define the next generation of risk professionals.

Our view is clear: AI offers a once-in-a-generation opportunity to transform risk management. When embedded into redesigned processes, it can dramatically expand risk coverage, elevate the experience of managing risk and enable faster, better-informed decisions.

Risk Strategist mindset

The 2025 EY Global Risk Transformation Study identifies two archetypes: Risk Strategists and Risk Traditionalists. Risk Strategists are organizations that have embraced strategic, tech-enabled approaches to risk, making them 48% more likely to reduce unexpected risks and 35% more likely to improve incident response times.


Agentic AI represents the next evolution of this mindset: it requires the cultural readiness, structural agility and innovation orientation that Risk Strategists have begun to cultivate. By leveraging AI, these organizations can strengthen their risk management capabilities – broadening coverage, improving data quality to generate faster insights, enabling better decision-making and ultimately building a more proactive risk posture.

In contrast, Risk Traditionalists – those still operating in siloed, compliance-driven models – will struggle to realize the benefits of agentic AI unless they transform their foundations.

As organizations navigate this transition, risk leaders must foster an environment that encourages experimentation and learning, improving both human and AI capabilities for optimal outcomes.

The latest EY/Institute of International Finance (IIF) global bank risk management survey highlights the challenges facing today’s CROs:

  • The risk landscape is broadening. Data risks (privacy, governance and control) and the use of AI within organizations are moving rapidly up CROs’ risk agendas.

  • 57% of banks recognize that increased AI adoption will be a key initiative to help manage this expanding risk profile.

  • 12% of respondents report not using AI at all, while most of those that do apply it primarily to anomaly detection and the automation of operational tasks – the lower end of the value spectrum.

Barriers to adoption

As the EY/IIF survey shows, the path to adopting agentic AI isn’t without its challenges. Risk leaders face mounting pressures on several fronts – rising productivity demands, geostrategic disruption, regulatory scrutiny and acute talent shortages. Although pilot programs indicate that AI can deliver up to tenfold productivity gains, many organizations struggle with the cultural and organizational shifts required for successful adoption.

Agentic AI requires a transformation that goes beyond simply implementing generative AI (GenAI); it demands a rethinking of how risk professionals develop their skills and judgment. The EY risk transformation study shows only 32% of organizations globally qualify as Risk Strategists. The rest are held back by risk-averse cultures, siloed structures and an inability to quantify the ROI of advanced risk management – barriers that will inhibit agentic AI adoption.

Closing the skills and trust gap

Compounding these challenges is a significant skills gap in AI fluency. According to the Financial Services Skills Commission report, 81% of firms cite a lack of specialist talent as a barrier to adopting AI. Organizations should ensure their workforce combines AI knowledge with deep business understanding and essential human skills such as judgment, adaptability and ethical decision-making. Embedding AI ethics into operational frameworks is non-negotiable for a profession built on safeguarding trust.


External pressures

The external landscape is also rapidly evolving. Customers are deploying their own AI agents, regulators are pushing for real-time reporting and criminals are exploiting advanced technologies. In this AI arms race, agentic AI may serve as a crucial defense mechanism.

 

Rethinking the risk operating model

To fully leverage agentic AI, leaders must rethink the core operating model of the risk function, focusing on:

  • People: Developing new roles that foster human-AI collaboration and strengthening critical thinking and judgment skills that technology cannot replace.

  • Process: Designing workflows that support agent autonomy while maintaining essential human oversight.

  • Technology: Implementing robust infrastructure and tools, including AI ‘guardrails’ to promote safe agent behavior. Risk Strategists already lead in this area. They are significantly more likely to use advanced techniques such as horizon scanning (81% more likely), stress testing, Monte Carlo simulations and black swan analysis – methods that agentic AI can enhance and scale.
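To make the Monte Carlo techniques mentioned above concrete, the sketch below estimates a tail-loss figure for a single risk. It is a minimal, illustrative example only – the function name, frequency and severity parameters are all hypothetical assumptions, not any specific vendor tool or EY methodology:

```python
import random

def simulate_annual_loss(n_trials=10_000, event_prob=0.05,
                         loss_mean=1_000_000, loss_sd=250_000, seed=42):
    """Monte Carlo sketch: estimate the 99th-percentile (1-in-100)
    annual loss for one risk, given assumed frequency and severity."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # A loss is incurred only if the risk event materializes this year.
        if rng.random() < event_prob:
            losses.append(max(0.0, rng.gauss(loss_mean, loss_sd)))
        else:
            losses.append(0.0)
    losses.sort()
    return losses[int(0.99 * n_trials)]  # 99th-percentile loss

print(f"99th-percentile simulated annual loss: {simulate_annual_loss():,.0f}")
```

Agentic AI scales this pattern by re-running such simulations continuously as inputs change, rather than as a periodic manual exercise.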

 

Rethinking roles and skills

As organizations adapt, new roles will emerge, including:

  • AI-augmented business relationship managers: Collaborate with AI copilots to analyze data and draft risk narratives.

  • AI orchestrators or ‘conductors’: Manage teams of digital risk agents, assigning tasks, setting performance goals and ensuring quality output.

  • AI training and governance specialists: Safeguard the accuracy, fairness and compliance of AI agent behavior.
     

Ultimately, human judgment will remain the final checkpoint for critical risk decisions, reinforcing the importance of a human-in-the-loop approach.
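As a minimal sketch of what such a human-in-the-loop checkpoint might look like in practice – with hypothetical role names, severity labels and thresholds, not a prescribed design – an agent’s findings can be routed so that critical decisions always reach a person:

```python
from dataclasses import dataclass

@dataclass
class AgentFinding:
    """A risk flagged by an AI agent, with its self-reported confidence."""
    description: str
    severity: str      # e.g. "low", "medium", "high"
    confidence: float  # 0.0 to 1.0

def route_finding(finding: AgentFinding, auto_threshold: float = 0.9) -> str:
    """Route an agent finding: auto-handle routine low-severity items,
    but always escalate high-severity ones to a human reviewer."""
    if finding.severity == "high":
        return "escalate_to_human"   # humans own critical decisions
    if finding.confidence >= auto_threshold:
        return "auto_remediate"      # routine item, high confidence
    return "queue_for_review"        # uncertain: ask a person

print(route_finding(AgentFinding("unusual vendor payment", "high", 0.97)))
# A high-severity finding is escalated regardless of agent confidence.
```

The key design choice is that escalation is determined by severity first, so no level of machine confidence can bypass human sign-off on critical risks.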

 

How to prepare for an agentic future: Next steps for risk leaders

  • Move from active experimentation to early adoption: Build out use cases and drive greater adoption of agentic AI from the top.

  • Design and develop operational frameworks: Implement robust governance and controls, integrating AI enablers and guardrails within which agents must operate.

  • Evolve career paths: Develop ‘citizen developers’ and ‘AI-savvy’ risk officers through targeted training and upskilling.

  • Rethink the org chart: Shift to smaller human teams overseeing more AI agents, creating new roles like Head of Automated Risk Operations.

  • Address the talent gap: As demand for AI-aware risk professionals outstrips supply, firms may face higher recruitment and retention costs, which some organizations are beginning to treat as a strategic risk at the board level.
     

Without a proactive plan to reskill teams and adapt operating models, risk functions could become outdated – eroding trust and leaving organizations vulnerable to emerging threats. However, those who act now can establish a new standard for risk management in the AI era.


These steps reflect the same mindset and organizational readiness that characterize Risk Strategists. Agentic AI builds on this foundation, offering the next stage of evolution for those ready to move from traditional models to intelligent, collaborative risk operations.


Summary

Agentic AI has the potential to transform risk management, delivering unprecedented efficiency and strategic insight. Capturing this value requires bold action: risk leaders must redesign their operating models, embrace new roles and processes and prioritize upskilling alongside AI ethics. Organizations that act decisively will build stakeholder trust while setting new industry standards. The window for competitive advantage is narrow; leaders who move now won't just adapt to the AI era – they'll define it.
