Why organizations need a distinct risk framework for Agentic AI

The EY.AI podcast explores why Agentic AI needs a new risk framework, highlighting vulnerabilities and three key priorities: observability, testing and human oversight.

In the latest episode of the EY.AI podcast series, part of EY India Insights, Abbas Godhrawala, Partner, Risk Consulting, EY India, discusses why Agentic AI (systems that act autonomously and can make real-time decisions) requires a new, more robust risk framework than those used for traditional AI.

He highlights that because Agentic AI can take actions independently, adapt to changing environments and interact across multiple systems, it is more exposed to adversarial threats, unpredictable behavior, cascading system interactions and reduced human oversight. Abbas suggests three foundational priorities for enterprises to adopt from the EY Agentic AI risk framework: observability, testing frameworks and human oversight.

Key takeaways:

  1. Agentic AI systems present three central challenges: limited human visibility, gaps in data and system integrity, and insufficient monitoring mechanisms.
  2. As AI becomes autonomous, organizations must strengthen frameworks for privacy and accountability.
  3. Incomplete or uneven data increases the likelihood of inaccurate outcomes and operational disruptions.
  4. Traditional monitoring tools are often inadequate for tracking evolving behavior, which makes continuous observability essential.
  5. Among the many EY Agentic AI risk domains, the three most crucial for enterprises are: strong observability, rigorous testing and clear human oversight.

Organizations should be able to explain not only the 'what' of the AI system and its decisions, but also 'who' is responsible when something goes wrong. This is especially important because Agentic AI systems operate autonomously.

For your convenience, a full text transcript of this podcast is available at the link below:



Building a risk framework for Agentic AI

Explore more insights from this episode: Read the article for a deeper look at managing risks and driving trust in next-gen AI systems.


Podcast

Episode 30 | Duration: 8m 58s