EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
Why organizations need a distinct risk framework for Agentic AI
EY.AI podcast explores why Agentic AI needs a new risk framework, highlighting vulnerabilities and key priorities: observability, testing and human oversight.
In the latest episode of the EY.AI podcast series, part of EY India Insights, Abbas Godhrawala, Partner, Risk Consulting, EY India, discusses why Agentic AI (systems that act autonomously and can make real-time decisions) requires a new and more robust risk framework that is different from the frameworks for traditional AI.
He highlights that because Agentic AI can take actions independently, adapt to changing environments and interact across multiple systems, it is more exposed to adversarial threats, unpredictable behavior, cascading system interactions and reduced human oversight. Abbas suggests three foundational priorities for enterprises to adopt from the EY Agentic AI risk framework: observability, testing frameworks and human oversight.
Key takeaways:
Agentic AI systems present three central challenges: limited human visibility, weak data and system integrity, and insufficient monitoring mechanisms.
As AI becomes autonomous, organizations must strengthen frameworks for privacy and accountability.
Incomplete or uneven data increases the likelihood of inaccurate outcomes and operational disruptions.
Traditional monitoring tools are often inadequate for tracking evolving behavior, which makes continuous observability essential.
Among the many EY Agentic AI risk domains, the three most crucial for enterprises are: strong observability, rigorous testing and clear human oversight.
Organizations should be able to explain not only the 'what' of the AI system and its decisions, but also 'who' is responsible when something goes wrong. This is especially important because Agentic AI systems operate autonomously.
Abbas Godhrawala
Partner, Risk Consulting, EY India
For your convenience, a full text transcript of this podcast is available at the link below:
Pallavi
Hello and welcome to a new episode of the EY.AI podcast series, a part of the EY India Insights podcast, where we explore how artificial intelligence is reshaping businesses and redefining possibilities.
I am your host Pallavi and today we are diving into one of the most important topics and emerging conversations in AI - Building a risk framework for Agentic AI.
As AI systems evolve from being assistive to becoming autonomous, organizations need a robust framework to manage new dimensions of risk, from data privacy and bias to accountability and control.
To help us unpack this complex but timely topic, I am joined by Abbas Godhrawala, Partner with EY India. Abbas leads digital and technology risk advisory and has deep experience in helping organizations strengthen their AI governance, risk management and assurance models.
Welcome to the podcast, Abbas. It is great to have you here with us today.
Abbas Godhrawala
Thank you, Pallavi.
Pallavi
Abbas, to begin with a basic question, what sets Agentic AI apart from traditional AI systems and why does it require a distinct risk framework?
Abbas Godhrawala
Pallavi, that is an excellent question. The fundamental difference between traditional AI and Agentic AI lies in autonomy and real-time decision making. Traditional AI systems typically operate under human supervision or predefined rules. Agentic AI, by contrast, can take independent actions, respond dynamically to changing environments and interact across multiple systems. This capability introduces new forms of risk, such as unpredictable behavior, cascading system interactions and difficulties in human oversight. Hence, a distinct risk framework is essential - one that goes beyond static controls and incorporates continuous monitoring, accountability mechanisms, human oversight and transparency as core design elements.
Pallavi
Thank you, Abbas. Pivoting towards challenges, what are the practical challenges that companies face today when implementing governance and control mechanisms for Agentic AI systems?
Abbas Godhrawala
Companies face several challenges, and I will highlight three of the key ones.
First is visibility: once these agents start operating autonomously, organizations may lose sight of how decisions are being made, what data is accessed and how systems evolve.
Second is data and system integrity: Agentic AI typically interacts with multiple data sources and systems; if the inputs are weak, inconsistent or poorly governed, the risk of faulty outcomes or operational disruption rises.
Third is monitoring, which is equally critical. Many organizations currently use monitoring tools designed for rule-based systems, which are not always fit for purpose in autonomous or evolving contexts. Organizations therefore need to think in terms of observability, audit trails, anomaly detection and governance processes that are adaptive rather than static.
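The adaptive observability Abbas describes can be pictured with a minimal sketch. All names and thresholds below are hypothetical illustrations, not part of any EY framework: a wrapper that records an audit trail for every agent action and flags a simple anomaly when latency drifts far from its rolling baseline.

```python
import json
import time
from collections import deque

class ObservableAgent:
    """Hypothetical wrapper: logs every agent action and flags basic anomalies."""

    def __init__(self, agent_fn, latency_window=20, latency_threshold=3.0):
        self.agent_fn = agent_fn                  # the underlying agent callable
        self.audit_trail = []                     # append-only action log
        self.latencies = deque(maxlen=latency_window)
        self.latency_threshold = latency_threshold

    def act(self, task, context):
        start = time.time()
        result = self.agent_fn(task, context)
        elapsed = time.time() - start
        # Anomaly check: latency far above the rolling average may signal drift.
        baseline = sum(self.latencies) / len(self.latencies) if self.latencies else elapsed
        anomalous = len(self.latencies) > 5 and elapsed > self.latency_threshold * baseline
        self.latencies.append(elapsed)
        self.audit_trail.append({
            "timestamp": start,
            "task": task,
            "result": str(result)[:200],  # truncated for the log
            "elapsed_s": round(elapsed, 4),
            "anomalous": anomalous,
        })
        return result

    def export_trail(self):
        """Serialize the audit trail, e.g. for review or regulator requests."""
        return json.dumps(self.audit_trail, indent=2)
```

In practice the anomaly signal would come from richer behavioral features than latency alone, but the pattern is the same: every autonomous action leaves a reviewable trace.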
Pallavi
Thank you, Abbas. In the context of evolving global regulations like the EU AI Act and India's governance guidelines, how should companies adapt their AI governance strategies?
Abbas Godhrawala
Regulations across the globe are evolving rapidly, and organizations must shift from reactive compliance to proactive governance. For example, the EU AI Act places strong emphasis on transparency, accountability and heightened scrutiny of high-risk systems. India's recently published AI guidelines, likewise, focus on mitigating the risks of AI for individuals and society.
Practically, this means embedding compliance by design - ensuring governance, accountability and controls are already in place when the system is built and deployed. Organizations should be able to explain not just the ‘what’ of an AI system and its decisions, but also ‘who’ is accountable if something goes wrong. This is especially important for Agentic AI applications that operate autonomously.
Pallavi
Thank you, Abbas. Among the domains in the EY Agentic AI risk framework, which do you see as most critical for organizations beginning their AI journey?
Abbas Godhrawala
Looking at the overall framework, I would recommend three foundational domains. First is observability: being able to see and understand how an agent behaves, what data it uses and how its decisions evolve over time is very important. Without this visibility, you cannot build trust or intervene when needed. Second is evaluation and testing frameworks: we need to simulate the real-world behavior of the agent, test its responses and monitor for unintended outcomes. This helps mitigate risk before the agent goes live. Third is human oversight, which is very important - a human-in-the-loop approach should be mandated for high-stakes decisions with significant outcomes and consequences. Together, these three aspects give organizations a strong foundation to scale responsibly.
Looking ahead, it is very important to ensure that all these principles are thoroughly embedded in an overall governance framework and implemented.
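The human-in-the-loop gate discussed above can be sketched in a few lines. This is a simplified illustration only; the threshold, action types and scoring function are assumptions, and a real deployment would use policy-driven risk models rather than a lookup table.

```python
# Minimal human-in-the-loop gate: the agent acts autonomously on low-stakes
# actions but must obtain explicit human approval above a risk threshold.

HIGH_STAKES_THRESHOLD = 0.7  # assumed cut-off; would be policy-driven in practice

def risk_score(action):
    """Toy scoring: real systems would combine policy rules and risk models."""
    weights = {"send_email": 0.2, "update_record": 0.4, "transfer_funds": 0.9}
    return weights.get(action["type"], 0.5)  # unknown actions default to mid-risk

def execute(action, human_approver):
    score = risk_score(action)
    if score >= HIGH_STAKES_THRESHOLD:
        # High-stakes: the agent must not proceed without explicit approval.
        if not human_approver(action, score):
            return {"status": "rejected", "score": score}
    return {"status": "executed", "score": score}
```

For example, `execute({"type": "transfer_funds"}, human_approver=lambda a, s: False)` returns a rejected status, because the high-stakes action was escalated and the approver declined it.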
Pallavi
Thank you, Abbas. What are the emerging risks or opportunities you foresee as Agentic AI becomes more integrated into business operations?
Abbas Godhrawala
On the opportunities side, Agentic AI opens doors to smarter automation, real-time decision making, enhanced efficiency and new business models. However, it also introduces new risks: adversarial threats such as prompt injection and model manipulation, as well as ethical concerns like biased decisions, lack of accountability, and reputational or regulatory fallout if something goes wrong. The key is to balance innovation with responsibility. Deploying Agentic AI is not just about capability; it is about building trust, embedding transparency and maintaining control. Organizations that achieve this balance will lead rather than lag.
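Prompt injection, one of the adversarial threats mentioned above, can be partially screened with simple input checks before untrusted text ever reaches an agent. The patterns below are a naive illustration only; real defenses layer such filters with model-side safeguards, privilege separation and output monitoring.

```python
import re

# Naive prompt-injection screen: flags common instruction-override phrasings
# in untrusted input. The pattern list is illustrative, not exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the input matches a known override phrasing."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A flagged input would then be blocked, sanitized or routed for human review rather than passed to the agent verbatim.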
Pallavi
Thank you, Abbas. That brings us to the end of this conversation. Thank you so much for sharing such valuable insights on how enterprises can balance innovation and responsibility in the age of Agentic AI.
Abbas Godhrawala
Thank you, Pallavi.
Pallavi
Thanks to all our listeners. We have heard today that building a risk framework is not just about compliance; it is about creating confidence, trust and long-term value as AI takes on a more autonomous role in decision making.
To all our listeners, thank you for tuning in to the EY.AI podcast series, part of EY India Insights. Stay connected with us for more conversations with our leaders on how organizations can responsibly harness the power of AI to shape a better working world.
Until next time, this is Pallavi signing off.
Risk Consulting services at EY help identify and manage risks across domains such as Digital Risk, Enterprise Risk, Financial Services Risk, and Actuarial & Risk Solutions to drive business success.
Explore AI risk and governance services at EY, with trusted frameworks, compliance, audit and controls for AI risk management and ethical, secure AI deployment.