
Building a risk framework for Agentic AI


Agentic AI brings immense opportunity, but only organizations with a strong risk framework will be ready to harness it responsibly.


In brief

  • Organizations must implement a robust risk management framework for Agentic AI to address its autonomous decision-making capabilities.
  • A multi-layered risk management framework helps ensure transparency, accountability and human oversight in Agentic AI applications.
  • The EY Responsible AI framework offers a comprehensive approach, focusing on governance, model security and compliance with emerging regulations.

As organizations adopt Agentic AI, autonomous systems that make decisions in real time, the need for a robust risk management framework becomes vital. These systems respond dynamically to changing environments, making decisions and taking actions independently. While the potential is immense, so are the risks, ranging from unpredictable behavior to ethical breaches and compliance failures.

To manage these risks effectively, organizations must go beyond static safeguards and adopt dynamic oversight, continuous monitoring and adaptive governance. A comprehensive AI risk framework embeds long-term controls, enabling a culture of responsible AI use. It helps prevent unintended consequences, whether from system failures or human misuse, and is crucial in regulated, data-sensitive sectors such as finance and healthcare.

Without this foundation, organizations risk not only operational disruption but also reputational and regulatory fallout.

Responsible AI framework from EY as a foundation

The EY Responsible AI framework helps organizations mitigate AI risks while complying with emerging regulations. It is built on seven key domains to establish robust governance processes aligned with industry-leading standards of Responsible AI:

  • Governance
  • Model design and development
  • Model security
  • Data management
  • Identity and access management
  • Business resiliency
  • Security operations


The foundational framework served well for traditional and Gen AI deployments. However, Agentic AI introduces new dimensions of risk due to its autonomous and evolving nature, necessitating an enhanced approach.
 

A multi-layered approach

Unlike Gen AI, which is largely static and prompt-driven, Agentic AI operates autonomously, so its management framework should take a multi-layered approach that integrates technical, ethical and procedural safeguards, built around three guiding principles: transparency, accountability and human oversight.
 

Agentic AI risk management framework

The enhanced framework builds on the foundation laid by the EY AI risk and governance framework and is designed to be multi-dimensional, addressing risk across eight core domains, which create an integrated shield of controls and oversight.

Implementation roadmap for organizations

Implementing a risk framework for Agentic AI applications rests on four key pillars: security foundation, governance and accountability, technical controls, and transparency and compliance.

Security foundation

This pillar safeguards AI systems from vulnerabilities and unauthorized access. It includes:

  • Implementing strict Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) protocols with the principle of least privilege to limit access to sensitive AI functions.
  • Enforcing Multi-Factor Authentication (MFA) for all agent interactions, adding an extra layer of security.
  • Testing AI agents in a sandbox environment to identify and mitigate behavior risks.
  • Detecting and preventing unauthorized activities through command chain monitoring.
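The least-privilege access control described above can be sketched as a simple permission lookup: an agent's role is granted only an explicit allow-list of actions, and anything outside that list is denied by default. The role names and permissions below are illustrative assumptions, not part of the EY framework itself.

```python
# Minimal least-privilege RBAC sketch for agent actions.
# Roles and permissions here are hypothetical examples.
ROLE_PERMISSIONS = {
    "reader_agent": {"read_data"},
    "trader_agent": {"read_data", "place_order"},
    "admin": {"read_data", "place_order", "modify_limits"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the role's permission set explicitly includes it.

    Unknown roles get an empty set, so everything is denied by default.
    """
    return action in ROLE_PERMISSIONS.get(role, set())
```

In a production deployment this lookup would typically sit behind the organization's identity provider, with MFA enforced before any agent session is issued a role at all.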

Governance and accountability framework

This pillar focuses on managing AI risks effectively, including:

  • Establishing a cross-functional governance team with diverse specializations.
  • Defining responsibility matrices for each agent's actions.
  • Maintaining human in the loop to ensure human oversight for high-stakes decisions.
  • Regularly auditing AI systems for fairness and enforcing ethical guidelines to prevent discrimination.
  • Defining protocols in incident response plans to address breaches or system failures and minimize damage.
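The human-in-the-loop control above can be illustrated as a routing gate: actions whose estimated risk exceeds a threshold are escalated to a human approver instead of executing automatically. The threshold value and risk scores below are purely hypothetical placeholders.

```python
# Illustrative human-in-the-loop gate for high-stakes agent actions.
# The threshold and risk scores are assumptions for the sketch.
HIGH_STAKES_THRESHOLD = 0.7

def route_action(action: str, risk_score: float, human_approves) -> str:
    """Execute low-risk actions directly; escalate high-risk ones for review.

    `human_approves` is a callable standing in for the human review step.
    """
    if risk_score >= HIGH_STAKES_THRESHOLD:
        # High-stakes decision: a human must sign off before execution.
        return "executed" if human_approves(action) else "rejected"
    # Low-risk decision: the agent may proceed autonomously.
    return "executed"
```

Who sets the threshold, and for which action classes, is exactly the kind of decision the responsibility matrix in this pillar should assign to a named owner.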

Technical controls

This pillar provides the means to monitor, control and intervene during AI operations. It includes:

  • Implementing kill switches with graduated intervention options (e.g., warnings, restrictions, shutdowns) to halt AI operations in case of anomalies or risks.
  • Deploying multi-agent verification systems for interactions between multiple AI agents so that collaboration does not lead to unintended consequences.
  • Establishing real-time anomaly detection systems to flag deviations from expected agent behavior.
  • Deploying security event management (SEM) tools to correlate events and confirm that the AI system operates within predefined parameters.
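The graduated kill switch mentioned above can be sketched as a severity-to-response mapping: mild anomalies trigger warnings, moderate ones restrict the agent, and severe ones halt it outright. The severity bands and response names below are illustrative assumptions.

```python
# Sketch of a graduated intervention (kill switch) policy.
# Severity bands and responses are hypothetical examples.
def intervene(severity: float) -> str:
    """Map an anomaly severity score in [0, 1] to a graduated response."""
    if severity < 0.3:
        return "warn"      # log a warning; the agent keeps running
    if severity < 0.7:
        return "restrict"  # limit the agent to low-risk actions only
    return "shutdown"      # full kill switch: halt the agent entirely
```

In practice the severity score would come from the anomaly detection and SEM tooling in this pillar, and each response level would be wired to an audited, reversible operational procedure.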

Transparency and compliance

This pillar ensures AI systems are interpretable, auditable and compliant with regulatory standards. For instance:

  • Developing AI modules capable of explaining their decision-making processes.
  • Creating immutable audit trails of all agent actions, facilitating forensic analysis in case of incidents.
  • Enabling adherence to leading compliance standards by aligning organizational policies with frameworks such as the NIST AI RMF and ISO/IEC standards, and with regulations such as the EU AI Act, GDPR and CCPA.

These pillars integrate all components to safeguard Agentic AI systems and ensure their responsible operation within the organization.


The way ahead

Agentic AI is here, and it is moving fast. Organizations that thrive in this new era will not be the ones that react to risk, but those that build for it from the ground up. A robust, multi-layered risk framework is not just a protective measure; it is a strategic advantage. By embedding trust, transparency and control into AI systems today, organizations can lead with confidence tomorrow, responsibly scaling the power of autonomous intelligence while staying firmly in command.


Summary

To mitigate Agentic AI risks, organizations should implement dynamic oversight and continuous monitoring by leveraging the EY Responsible AI framework. This framework emphasizes governance, model security, and regulatory compliance, ensuring responsible AI use. By adopting a multi-layered approach, organizations can embed transparency and accountability, safeguarding themselves against operational disruptions and reputational damage.

