Futuristic data streams

The rise of agentic AI: transforming fraud risk management

Contributors:
Wassim Azzoug, Senior Consultant, Financial Crimes
Florence Laisné, Consultant, Financial Crimes
Shanna Auger-Drolet, Consultant, Financial Crimes

Agentic AI has the potential to transform fraud risk management with real-time monitoring and adaptive learning, enhancing detection and prevention strategies.


In brief

  • Agentic AI revolutionizes fraud risk management by enabling real-time monitoring and adaptive learning, allowing organizations to respond proactively to evolving fraud tactics.
  • Unlike traditional AI, which relies on historical data, agentic AI autonomously analyses transactions and refines detection methods, significantly reducing false positives and enhancing accuracy.
  • The integration of agentic AI into fraud prevention strategies empowers organizations to automate responses, improve operational efficiency, and safeguard their assets while maintaining customer trust.

The rise of agentic AI: transforming fraud risk management

Agentic AI refers to systems capable of acting independently, exhibiting a level of agency that allows them to navigate complex situations without human intervention. These systems use advanced algorithms in machine learning, natural language processing and real-time data analysis to identify patterns, make predictions and execute decisions.

Unlike traditional AI, which relies on static algorithms and historical data patterns to identify fraud, agentic AI employs dynamic learning techniques that enable it to adapt to evolving fraud tactics in real time, allowing for more proactive and effective detection of new and sophisticated fraudulent schemes.

Generative AI is designed to create original content such as text, images, videos, audio or software code in response to user requests. It relies on machine learning models, particularly deep learning models, that simulate the learning and decision-making processes of the human brain. Key features of generative AI include:

  • Content creation
  • Data analysis
  • Adaptability
  • Personalization

Agentic AI, on the other hand, is designed to make decisions autonomously and act with minimal human supervision. It combines the flexible characteristics of large language models (LLMs) with the precision of traditional programming. Key features of agentic AI include:

  • Decision-making
  • Problem-solving
  • Autonomy
  • Interactivity
  • Planning

Comparing agentic AI and gen AI in fraud risk management

Example: autonomous transaction monitoring and pattern recognition


Gen AI reviews historical transaction data to generate detailed reports on trends and anomalies, such as identifying that a specific type of transaction is frequently linked to fraud. For example, it may reveal that new accounts making large purchases are often fraudulent. While it cannot act in real time, these insights enable compliance teams to proactively adjust their monitoring strategies and implement targeted measures to mitigate risks before they escalate.
 

Agentic AI autonomously monitors transactions in real time, analyzing them against risk thresholds. In contrast to rule-based systems that create many false positives, agentic AI learns from feedback to refine detection methods. By automating monitoring, it enhances accuracy and efficiency, reducing the resource drain on compliance teams. For instance, if a customer who usually makes small local transactions suddenly attempts a large international transfer, the system immediately flags it as suspicious and generates an alert. It learns from feedback to reduce false positives and adapt to new fraud patterns, enhancing accuracy and efficiency in transaction monitoring. 

Agentic AI provides real-time monitoring and adaptive learning, while generative AI offers valuable insights that help refine fraud prevention strategies.

How agentic AI will revolutionize fraud

Agentic AI in fraud

Hyper-personalized social engineering

Agentic AI allows attackers to craft and deliver deeply personalized scams at scale — adapting messages and tone based on real-time victim behaviour. What used to take days now takes minutes, with far greater emotional precision.

The AI agent’s value chain of attack:

  • Gathers data from public sources and breaches.
  • Creates tailored calling scripts or emails.
  • Places calls using realistic tone and voice.
  • Adapts strategy based on victim hesitation.

Autonomous targeting and lateral movement

Fraudsters will be able to deploy autonomous agents that behave like human insiders — infiltrating environments, navigating systems and exfiltrating sensitive data while minimizing detection risk.

The AI agent’s value chain of attack:

  • Gains initial entry through automated reconnaissance followed by targeted exploitation or social engineering.
  • Maps systems and finds vulnerabilities.
  • Blends in with legitimate behaviour.
  • Removes data in stealthy, multi-stage patterns.

Creation of synthetic identities and documents

With agentic AI, fraud will become productized — attackers will be able to create agents that generate fake documents, bypass controls, and apply for loans or services on behalf of synthetic identities, all with minimal manual effort.

The AI agent’s value chain of attack:

  • Combines breached data, fake Social Insurance Number/Social Security Number to create synthetic identities.
  • Creates realistic forged paperwork.
  • Submits requests while evading detection.
  • Refines techniques using feedback loops.

How can organizations employ agentic AI?

With agentic AI, organizations will be able to significantly enhance their fraud prevention strategies, benefiting from real-time data analysis, improved accuracy in identifying suspicious activities, reduced operational costs through automation and the ability to adapt quickly to evolving fraud tactics, thereby safeguarding their assets and reputation.

EY employ Agentic AI

Autonomous fraud response

Instead of long investigation cycles, organizations will be able to empower AI agents to detect, engage, verify and respond — all in a few seconds, without losing the customer’s trust.

The AI agent’s value chain of defence:

  1. Detects the suspicious event.
  2. Manages the alert by either escalating the case or resolving it independently.
  3. Contacts the customer with personalized messaging and verifies the legitimacy.
  4. Acts on the case (approve/block/escalate).

Control optimization and strategy tuning

Agentic AI will be able to analyze fraud prevention strategies continuously — fine-tuning thresholds, rule logic and model outputs to achieve a balance between risk reduction and customer experience.

The AI agent’s value chain of defence:

  1. Measures control effectiveness.
  2. Models edge cases and emerging threats.
  3. Suggests rule/model/control updates.
  4. Orchestrates controlled rollout.

Real-time behavioural risk assessment

Agentic AI will be able to continuously evaluate user behaviour across touchpoints — not just flagging anomalies but interpreting them in context to stop fraud before it occurs.

The AI agent’s value chain of defence:

  1. Monitors for risky behaviour patterns.
  2. Compares against historical trends.
  3. Triggers alerts, blocks or asks for step-up authentication.
  4. Refines logic using incident outcomes.

Automated red teaming for fraud scenarios

Defenders will be able to simulate fraud attacks using agentic AI — probing for vulnerabilities, testing detection thresholds and identifying control weaknesses like an attacker would.

The AI agent’s value chain of defence:

  1. Crafts fraud test case.
  2. Performs the simulated attack.
  3. Evaluates how far it gets.
  4. Recommends control improvements

Key considerations

EY Agentic AI key considerations

Explainability and regulatory accountability

Agentic AI systems may operate in ways that are nonlinear and opaque, particularly when drawing conclusions from unstructured or ambiguous data. This poses challenges in highly regulated sectors such as financial services, where decisions must be:

  • Explainable to internal stakeholders, clients and regulators
  • Traceable through logs and decision histories
  • Defensible in the event of disputes, regulatory scrutiny or litigation

Emerging legislation, including the EU AI Act and Canada’s Bill C-27, makes explainability not only a best practice but a legal requirement. Organizations must ensure that agentic systems can justify their actions in a manner that is intelligible and auditable.

Data access, privacy and governance

Agentic AI will require access to a wide array of internal systems — from transaction history and customer data to authentication and communication logs. Without strict governance, this level of access introduces substantial privacy and compliance risk.

To manage, this organizations should:

  • Implement principle-of-least-privilege access controls
  • Monitor agent queries and decisions for policy compliance
  • Enforce robust data minimization and retention practices

It's essential to consider agentic AI alongside gen AI, since both technologies present similar considerations. Additionally, organizations must prepare for potential insider misuse, where agents may be exploited for surveillance or unauthorized data aggregation.

Security and attack surface expansion

Agentic AI will soon become part of the organization’s digital attack surface. These systems may be vulnerable to:

  • Undesired outcomes
  • Adversarial examples designed to deceive the agent into misclassification
  • Prompt injection, where adversaries manipulate the agent’s input to or action
  • Exploitation of APIs or execution logic, particularly in agents connected to other systems

As such, agentic AI must undergo rigorous security testing, including red teaming and adversarial simulations. To prevent compromise or misuse, it’s vital to isolate agent actions, sanitize inputs and establish clear privilege boundaries.

Autonomy boundaries and ethical escalation

Organizations need to define the boundaries of an agent’s autonomy. Not all decisions should be made without human oversight, particularly in scenarios involving vulnerable customers, such as elderly clients or victims of social engineering, high-risk investigations or high-stakes financial outcomes.

Key questions include:

  • Under what circumstances must the agent defer to a human operator?
  • Which types of fraud cases should always trigger ethical review or escalation?
  • How should the agent respond when uncertainty or emotional harm is involved?

Embed ethical guardrails and escalation protocols into your organization’s system design so AI agents’ behaviour remains proportionate and contextually appropriate.

Conclusion – agentic AI and fraud: a doubled-edged revolution

A paradigm shift in both attack and defence.

Agentic AI will redefine how fraud is committed and how it’s mitigated. It will enable fraudsters to operate at an industrial scale, automating manipulation, impersonation and adaptation in real time. But it will also equip organizations with powerful tools for prevention, detection and response — in ways never before possible.

The human factor remains critical

Agentic AI will make systems smarter, faster and more efficient — but it won’t replace human insight. Organizations must ensure that human oversight, ethics and empathy are built into every layer of AI deployment.

This includes:

  • Establishing robust AI governance and control frameworks.
  • Building cross-functional teams of fraud experts, AI engineers and risk officers.
  • Preserving explainability and accountability at every decision point.

Summary

Agentic AI will revolutionize the pace, complexity and reach of fraud.

It will shape the next generation of both attackers and defenders — with higher stakes, greater automation and more blurred lines between machine and human behaviour.

To navigate this shift, your organization must not only invest in AI — you must invest in the resilience, integrity and agility of AI solutions and consider the people who use it.


About this article