
AI agents can make or break a financial institution


Agentic AI offers opportunities and risks. We discuss with Onsi and Stater how financial institutions can use AI agents safely and effectively.


In brief

  • AI agents provide efficiency, automation, and global reach, but carry operational and reputational risks.
  • Controlled deployment, governance, and AI literacy are essential for safe adoption.
  • We highlight practical applications and strategies for financial institutions.

AI agents are making their entrance into the financial sector. How are established players and new entrants dealing with both the opportunities and the risks? A conversation with Wynand Fourie of Onsi, Ruben Westerhof of Stater, and Remi Hesterman of EY.

AI is now far more than a useful tool for generating amusing images, summarizing documents, or managing emails. The next step is the rise of AI agents: systems that independently perform tasks, make decisions, and take over entire processes. This development is unfolding rapidly in the financial sector and, as with any new technology, brings both opportunities and challenges. That becomes clear in this round table: Getting Concrete with AI Agents.

Hesterman: “The opportunities and challenges differ by type of organisation. New players can build directly on cloud and AI, while established parties are often tied to legacy systems and existing governance. For them, the step toward so-called agentic AI is more complex.”

Fourie: “We see that in practice as well. Onsi is a small organisation with about forty people. Yet we actively offer our insurance platform in 37 countries. And frankly, without AI that would simply be impossible. With AI we can, for example, support customers in their own language. That isn’t a luxury but a basic requirement for operating globally with such a small team. And that’s one of the effects of AI: small organisations can enter large markets much more easily. It fundamentally changes the playing field.”

Westerhof: “For Stater the benefits are slightly different. We’ve been active for longer and take over the full administrative and technical management of mortgages for lenders so they can focus on their core activities. We therefore manage data and processes for multiple parties. AI mainly offers productivity gains – such as in document processing and auditing – and also improves service to customers.”

Even our CEO can now build a prototype.

Responsible use and clear boundaries

Can you be more concrete about where AI agents are being applied?


Hesterman: “That’s very broad, and at some clients I see dozens of use cases next to each other. Think of KYC processes, credit assessment, but also HR and communication. The technology is often not the problem; the challenge is organising adoption across the organisation and managing risks.”


Fourie: “Managing risks is a key theme for everyone, including startups. It’s no secret that an AI system can hallucinate and that users may try to manipulate an agent, for example in support chatbots. Support chat is one of our concrete applications, and we’ve deliberately built in a button that lets users switch back to a human employee at any time. That hybrid approach limits reputational risks. Other applications include the translation I mentioned. And I shouldn’t forget software development, because the impact there is impressive: ten engineers are now completing sprints so quickly that we struggle to feed them enough work.”


Westerhof: “We roughly focus on two tracks. On the one hand we focus on improving our services; we call that Client AI. Think of optimising and further digitising processes such as data extraction and validation, document processing, and the interaction with all parties in the chain, such as consumers and intermediaries. On the other hand we focus on making the internal organisation more productive; we call that Servicing AI. We must always take customer requirements into account. Some customers require on-premise solutions, while the technology is often cloud-first; that makes it more challenging.”


So agentic AI is promising but also risky. How do you ensure responsible use? And what frameworks are needed?


Fourie: “We’ve learned that full automation can be risky. When our chatbot handled everything, we received complaints. That’s why, as I said, we now always offer a path back to a human employee, which builds in human oversight. We’ve also learned that it’s wise to keep the task scope of individual agents small. Our chatbot is actually eight agents, each covering its own domain. That reduces risk.”
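The pattern Fourie describes, several narrowly scoped agents behind one chatbot, plus an always-available human handoff, can be sketched roughly as below. This is a minimal illustration, not Onsi's implementation: the agent names, keyword routing, and `Chatbot` class are all hypothetical, and a production system would replace the keyword matching with a proper intent classifier.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Reply:
    text: str
    handled_by: str  # "agent:<domain>" or "human"

def make_agent(domain: str, keywords: set) -> Callable[[str], Optional[Reply]]:
    """A small agent that only answers questions inside its own domain."""
    def agent(question: str) -> Optional[Reply]:
        if keywords & set(question.lower().split()):
            return Reply(f"[{domain}] answering: {question}", f"agent:{domain}")
        return None  # out of scope: decline rather than guess
    return agent

class Chatbot:
    def __init__(self) -> None:
        # Keeping each agent's task scope small limits the blast radius
        # of any single agent's mistakes (hypothetical domains).
        self.agents = [
            make_agent("claims", {"claim", "damage"}),
            make_agent("billing", {"invoice", "premium", "payment"}),
        ]

    def ask(self, question: str, wants_human: bool = False) -> Reply:
        # The "button": users can switch back to a person at any time.
        if wants_human:
            return Reply("Connecting you to an employee.", "human")
        for agent in self.agents:
            reply = agent(question)
            if reply:
                return reply
        # No agent in scope: escalate to a human instead of improvising.
        return Reply("Connecting you to an employee.", "human")
```

The key design choice is that the out-of-scope path falls through to a human rather than to a catch-all model response, which is what bounds the reputational risk discussed above.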


Hesterman: “That sounds familiar. The hype around agents is considerable, but they’re often not yet enterprise-ready. The greatest value today lies in task-oriented, well-bounded agents. Successful use requires governance, monitoring, and clear rules.”


Westerhof: “I’d add an AI policy aligned with the European AI Act. That means we perform risk assessments on our AI use cases throughout their life cycle and set rules for prompting, data use, cloud use, and logging. That gives you more control.”


Hesterman: “Control is essential. AI agents can make or break you: make you because they offer many opportunities, break you because the risks are significant. Fully autonomous agents may sound appealing but are not yet mature enough for large-scale use. That’s why I recommend starting with low-risk tasks. Build experience and then scale at a responsible pace, in line with the rising maturity levels of critical organisational components such as risk governance, the operating model, AI literacy, and a flexible platform. That lets you move quickly without entering the ‘danger zone’, where experimentation happens but nothing is actually deployed.”


Fourie: “Acting too slowly is risky as well. If you don’t build fundamental AI capabilities now, you’ll fall behind the competition. You must move quickly, but in a controlled way.”

There are often dozens of use cases side by side.

That pace often depends not on technology but on people. Is that the case here too?

Westerhof: “Absolutely. We see AI as an opportunity to make work more enjoyable and smarter. At the same time you need to prepare employees. AI literacy is crucial: everyone must understand what AI can do and where its limits are. HR plays a key role through training and by attracting people with experience using AI tools. And good AI adoption also requires strong collaboration across teams and roles in the organisation.”

Fourie: “If you’re creative, you suddenly gain a completely new spectrum of possibilities that are no longer reserved for programmers. Even our CEO can build prototypes. In the past you drew something on a whiteboard so engineers could start building; now you build it yourself as a working app. But the low barrier to technology has a flip side: junior work is largely taken over by AI. You no longer need to spend days writing code to test something simple, so fewer juniors are needed. Yet to become a senior, you first have to do junior work.”

Hesterman: “That theme is widespread, also in sectors like law and consulting. It will be interesting to see how that develops.”

What role does leadership play in successful adoption?

Westerhof: “At first we had a set of isolated initiatives. We then conducted an AI maturity assessment to get a better, more integrated view of our organisation’s maturity in AI adoption. The board was and remains actively involved, and this resulted in an integrated AI strategy with broad support. That alignment is crucial: top-down backing and an integrated approach in which everyone plays their part help the organisation move forward faster.”

Hesterman: “Leaders must balance innovation and control. AI evolves at incredible speed. Those who fail to stay flexible get stuck; those who are too cautious fall behind. The winners are the ones who combine speed, governance, and adoption effectively.”

This article is from the Eye on Finance magazine. Download the PDF (Dutch) here for more insights on Agentic AI and the financial sector, or explore the other articles below.

Participants

Wynand Fourie, Onsi

Ruben Westerhof, Stater

Remi Hesterman, EY




Summary

AI agents are transforming the financial sector, offering efficiency, automation, and global scalability, but they also introduce significant risks. Onsi, Stater, and EY discuss real-world applications, including customer service, document processing, and internal workflows. The experts emphasize governance, monitoring, and controlled deployment, balancing innovation with safety. Leaders must understand AI capabilities and limitations to implement these agents successfully. The round table highlights that AI literacy, clear policies, and gradual scaling are critical for maximizing benefits while minimizing potential harm, ensuring institutions remain competitive and responsible.


In this edition

Pinar Abay (ING) on the impact of Agentic AI on retail banking

How ING uses agentic AI to accelerate retail banking, transform processes, and scale safely toward market leadership.

How NN is redesigning customer processes with Agentic AI

How NN uses AI agents to transform processes, exceed customer expectations, and engage employees in a safe digital transformation.

