Succeeding where RPA didn’t
Agentic AI delivers on the unmet promise of robotic process automation (RPA). That technology emerged over a decade ago to automate routine office tasks that require humans, such as copying data between applications and performing data quality checks.
Automating workflows yielded only marginal results because workflows didn’t change. RPA lacked a critical element that agentic AI delivers: the ability to make decisions.
“RPA promised that automation would drive out cost,” says Traci Gusher, EY Americas AI and Data Leader. “It was effective when processes were well defined, but the results in many cases were only incremental improvements and not big process efficiencies. The cost benefits just didn’t meet the promise.”
In contrast, AI agents understand context, interpret natural language, incorporate multiple data types and reason about outcomes. This enables them to handle more complex and ambiguous tasks. Significantly, agents can understand and improve upon existing workflows, making them a strategic partner in business transformation.
AI-led business redesign
Agents can plan multi-step actions, adjust workflows dynamically and even delegate tasks to other agents or humans.
Because they are based on machine learning algorithms, their performance can improve over time based on feedback or outcomes. They can work across applications, systems and data sources to orchestrate full business processes rather than just task fragments.
These factors give agents enormous potential to catalyze change, but only if organizations understand how to put their unique attributes to use.
Organizations must involve agents in process design and automation to leverage their attributes effectively. However, this goes against human nature since experience has taught us to regard computers as essentially high-speed idiots, not trusted colleagues.
Gusher says the new approach goes beyond orchestrated workflows to become an “AI-led” strategy.
“If you forget the existing process and instead focus on the inputs needed and the results you want and create a net new AI-first process, you can see 90%+ improvement,” she says.
For example, EY Consulting trimmed the length of one internal process from 44 to 36 hours by focusing on integrating AI into the existing process. But, when the team discarded the current process entirely and reinvented it from scratch, the process time fell to 45 minutes.
Agents are already delivering value
Although agents are relatively new, compelling use cases are emerging. As an example, compliance tracking and reporting is manually intensive, but because the job requires contextual knowledge, humans typically do it.
AI agents show promise as human surrogates for this kind of work. They can pull data from multiple databases and applications, normalize it to align with regulatory requirements, identify and even repair issues with data quality, generate reports, and trigger remediation workflows.
Generative AI can summarize the often-voluminous books of rules that must be consumed.
Agents are fulfilling a long-unrealized dream of remaking customer service operations. Contact centers are stressful environments requiring human operators to quickly access data from multiple sources. Burnout and turnover are constant problems.
Agents can understand and respond to common questions quickly. They can be trained to take remedial action, such as issuing credits or scheduling service calls, allowing human operators to focus on the most pressing issues.
Automation isn’t perceived as negative; in fact, customer satisfaction improves. Gartner has reported that nearly three-quarters of customers who use self-service channels intend to do so again.1
New technology, new risks
For all their potential, agents carry some risks. Concerns about trust and security are the most common. The potential downsides are amplified when agents are empowered to make decisions autonomously.
Working with agentic AI requires a mindset change. Business computer systems have traditionally been deterministic, meaning they were programmed with a set of instructions and expected to carry them out repeatedly and reliably.
AI is non-deterministic. Models change their behavior based on environmental variables, even from minute to minute. This learning process means a chatbot or agent may deliver different answers to the same question.
It’s difficult to predict what will influence a model’s behavior. Introducing even a small amount of new training data can sometimes have dramatic results. People have learned to tolerate the occasional GenAI “hallucination,” but inappropriate behavior in a customer-facing scenario or business-critical process can be dangerous.
That means agents must be tested frequently to make sure that the model hasn’t “drifted” or degraded over time. Testing should also look for the possibility that training data was introduced that could corrupt model performance or ethics of the agents, either intentionally or by mistake.
Testing agentic AI systems isn’t like testing traditional software. “Historically, test cases have focused on desired outcomes,” Gusher says. “Agentic systems don’t necessarily take a step-by-step process to reach an outcome, so the way you test has to be expanded.”
A new kind of testing
One effective strategy is to employ “red team” testing, a cybersecurity tactic in which ethical hackers simulate real-world attacks to expose vulnerabilities. Red team testing goes beyond probing for security flaws to identify logic breakdowns, misaligned goals, unintended behaviors and model hallucinations in dynamic contexts.
“It’s fundamentally different from how we have traditionally tested systems,” Gusher says. “It’s similar to penetration testing, but it goes far beyond access to look at factors that influence a model’s behavior, making it do things it’s not supposed to do.”
That requires skills that many IT organizations may not have in house. Developing them or engaging an outside firm for AI testing is critical before moving agents into production and then continually monitored.
Be careful not to fall into the RPA trap. Small, incremental gains are good, but the biggest payoff comes from diving deep into a few areas and finding ways to reinvent them for sustainable advantage.
“Organizations should look at their opportunities, risks and challenges and think about where they can go deep instead of automating a thousand different pieces of processes,” she says. “That’s how you’re going to get to transformative change.”