Responsible use and clear boundaries
Can you be more concrete about where AI agents are being applied?
Hesterman: “That’s a broad question; at some clients I see dozens of use cases side by side. Think of KYC processes and credit assessment, but also HR and communication. The technology is often not the problem; the challenge is organising adoption across the organisation and managing the risks.”
Fourie: “Managing risks is a key theme for everyone, including startups. It’s no secret that an AI system can hallucinate and that users may try to manipulate an agent, for example in support chatbots. Those chatbots are one of our concrete applications, and we’ve deliberately built in a button that lets users switch back to a human employee at any time. That hybrid approach limits reputational risk. Other applications include the translation work I mentioned. And I shouldn’t forget software development, because the results there are impressive: ten engineers now complete sprints so quickly that we struggle to feed them enough work.”
Westerhof: “Broadly, we focus on two tracks. The first is improving our services, which we call Client AI. Think of optimising and further digitising processes such as data extraction and validation, document processing, and the interaction with all parties in the chain, such as consumers and intermediaries. The second is making the internal organisation more productive, which we call Servicing AI. In both tracks we must take customer requirements into account: some customers require on-premise solutions, while the technology is often cloud-first, and that makes things more challenging.”
So agentic AI is promising but also risky. How do you ensure responsible use? And what frameworks are needed?
Fourie: “We’ve learned that full automation can be risky. When our chatbot handled everything, we received complaints. That’s why, as I said, we now always offer a path back to a human employee, which builds in human oversight. We also learned that it’s wise to keep the task scope of individual agents small. Our chatbot is actually eight agents, each covering its own domain. That reduces risk.”
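The setup Fourie describes follows a common pattern: a router in front of several narrowly scoped agents, with a handoff to a human that is always available. A minimal sketch of that pattern, assuming nothing about the interviewees' actual systems (every name below is hypothetical):

```python
# Minimal sketch: a router in front of narrowly scoped agents, with a
# human handoff that is always available. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Reply:
    text: str
    handed_to_human: bool = False

def billing_agent(message: str) -> Reply:
    # Each agent covers one small domain, which limits the blast radius
    # of a hallucinated or manipulated answer.
    return Reply(f"[billing] handling: {message}")

def shipping_agent(message: str) -> Reply:
    return Reply(f"[shipping] handling: {message}")

# The router only selects an agent; it never answers on its own.
AGENTS: dict[str, Callable[[str], Reply]] = {
    "billing": billing_agent,
    "shipping": shipping_agent,
}

def route(message: str, wants_human: bool = False) -> Reply:
    # The "button back to a human" from the interview: the user can
    # always bypass the agents entirely.
    if wants_human:
        return Reply("Connecting you to a human employee.", handed_to_human=True)
    for domain, agent in AGENTS.items():
        if domain in message.lower():
            return agent(message)
    # Out-of-scope requests escalate instead of guessing.
    return Reply("No agent covers this; escalating to a human.", handed_to_human=True)

if __name__ == "__main__":
    print(route("Question about my billing statement").text)
    print(route("Anything at all", wants_human=True).text)
```

A production system would route on intent classification rather than keywords, but the design point is the same: each agent's scope stays small, and the human escape hatch sits outside the agents, so it cannot be argued away by one of them.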
Hesterman: “That sounds familiar. The hype surrounding agents is big, but they’re often not yet enterprise-ready. The greatest value today lies in task-oriented, well-bounded agents. Successful use requires governance, monitoring, and clear rules.”
Westerhof: “I’ll add two things. The first is an AI policy aligned with the European AI Act: we perform risk assessments on our AI use cases throughout their life cycle and set rules for prompting, data use, cloud use, and logging. That gives you more control.”
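To make concrete what such a life-cycle risk assessment might record, here is a purely illustrative sketch. The record structure and field names are assumptions, not any interviewee's actual policy; only the risk tiers follow the EU AI Act's categories.

```python
# Illustrative sketch of a use-case entry in an AI risk register.
# Structure and field names are assumptions; the tiers mirror the
# EU AI Act's risk categories.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"

@dataclass
class UseCaseAssessment:
    name: str
    risk_tier: RiskTier
    # The rule areas Westerhof mentions:
    prompting_rules: list[str] = field(default_factory=list)
    data_use_rules: list[str] = field(default_factory=list)
    cloud_use_rules: list[str] = field(default_factory=list)
    logging_enabled: bool = True
    last_reviewed: date = field(default_factory=date.today)

    def review_due(self, max_age_days: int = 180) -> bool:
        # Reassess periodically, i.e. throughout the life cycle.
        return (date.today() - self.last_reviewed).days > max_age_days

assessment = UseCaseAssessment(
    name="document-extraction assistant",
    risk_tier=RiskTier.LIMITED,
    prompting_rules=["no customer PII in prompts"],
    data_use_rules=["training on client data requires consent"],
    cloud_use_rules=["EU-region processing only"],
)
print(assessment.risk_tier.value, assessment.review_due())
```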
Hesterman: “Control is essential. AI agents can make or break you: make you because they offer many opportunities, break you because the risks are significant. Fully autonomous agents may sound appealing, but they are not yet mature enough for large-scale use. That’s why I recommend starting with low-risk tasks. Build experience, then scale at a responsible pace, in step with the growing maturity of critical organisational components such as risk governance, the operating model, AI literacy, and a flexible platform. That lets you move quickly without entering the ‘danger zone’, where experimentation happens but nothing is actually deployed.”
Fourie: “Acting too slowly is risky as well. If you don’t build fundamental AI capabilities now, you’ll fall behind the competition. You must move quickly, but in a controlled way.”