AI governance guidelines: A bet on innovation

India’s AI governance guidelines take a light-touch, innovation-first approach, promoting Responsible AI while avoiding heavy compliance.


In brief

  • India’s new AI governance guidelines adopt a light-touch regulatory model that prioritizes innovation while maintaining responsible oversight.
  • A national AI incident database and human-accountability focus strengthen safety without adding new compliance burdens.
  • Future impact may depend on timely implementation of sandboxes, copyright reforms, and judicial interpretation as AI evolves.

The India AI Governance Guidelines, recently unveiled by the Ministry of Electronics and Information Technology (MeitY), are a pragmatic, innovation-friendly blueprint focused on the rapid deployment of AI in India. They are a good example of light-touch AI regulation.

Unlike the previous advisories on AI, which suggested a prescriptive, compliance-oriented philosophy, these guidelines reflect a flexible, hands-off approach that favors innovation. Further, in a departure from the EU, which has introduced comprehensive AI regulation, the Indian guidelines indicate that no new AI-specific law is needed at this stage. In contrast to the EU law, India has not imposed any new restrictions, whether in the form of mandatory due-diligence requirements, a new regulatory body or potential bureaucratic hurdles, signaling a distinct AI regulatory approach for India.

This patient, wait-and-watch approach is welcome in a field characterized by rapid, unpredictable advancements. The creation of a national AI Incident Database is a notable feature that places India among the few countries taking a mature, evidence-based approach. It can help calibrate future positions based on concrete experience rather than hypothetical fears.


Fostering innovation

Today’s start-ups are the economic leaders of the future. The AI guidelines nurture a supportive ecosystem for start-ups, reflecting India’s broader AI innovation policy. The guidelines advocate for voluntary codes of conduct, stating "AI governance should foster innovation by minimizing regulatory burdens, relying on voluntary commitments, self-regulation, and adaptive guidelines that evolve with technology, rather than prescriptive mandates that could stifle growth." It is vital that the government ensures these codes remain flexible and avoid becoming onerous for nascent enterprises.
 

The advisory highlights the importance of regulatory sandboxes, noting that they "enable safe experimentation and iterative development in real-world environments without the risk of immediate non-compliance." This approach helps rapidly test AI solutions and take necessary steps to address any negative externalities, accelerating their time to market.
 

The use of copyrighted material in AI training data has been the subject of numerous lawsuits, notably by media companies in the US. Regions such as the EU, the UK, Japan and Singapore have proactively adopted text and data mining exemptions within their copyright laws to support AI training. Similarly, the 2025 Indian AI guidelines state that “Copyright laws may need to be amended to enable large-scale training of AI models, while ensuring adequate protections for copyright holders.”
 

The speed with which the government implements sandboxes and copyright reforms will be key to success. The government will also need to stay vigilant against any regulatory capture that stifles innovation.


Human accountability, legal personality of AI and skin in the game

The advisory strongly emphasizes the necessity of human control over AI systems, stating, “Humans should, as far as possible, have final control over AI systems.”
 

This core principle underscores that despite AI’s increasing capabilities, humans remain responsible for guiding and intervening in its functioning.
 

For the time being, the advisory has refrained from conferring any legal personality on AI. It clarifies that AI systems will not be treated as intermediaries under the Indian IT Act. However, it calls for the questions of how AI systems should be classified, what obligations they carry, and how liability may be imposed to be discussed and deliberated upon.
 

In the meantime, any complaints resulting from AI use will be addressed through existing laws and regulations, taking into consideration the role of humans and corporates in development, deployment and oversight.
 

This framework aims to balance innovation with responsibility, attributing clear legal liability to humans behind AI.

The judicial system as the final guardrail

Ultimately, these guidelines are not an endpoint but a start. As AI use grows, legal complaints will inevitably increase. Relying on the existing justice system to interpret and enforce laws related to AI negligence is a more robust, if slower, solution. It places the ultimate burden on time-tested common law principles, which are more resilient than new, unproven rules.

How the existing regulations, from consumer protection to penal code and sectoral guidelines, are interpreted in the context of rapidly evolving AI applications will be the true test. Only through wider capacity-building and expert-informed, interdisciplinary evaluation can our system provide meaningful remedies and develop robust Indian AI jurisprudence.

While the guidelines prioritize responsible innovation over cautionary restraint, the real test may lie in being able to maintain this stance as AI technologies and associated risks evolve.


Summary

India’s new AI governance guidelines take a light-touch, innovation-first approach, enabling rapid experimentation through sandboxes and voluntary standards while keeping human accountability central. With features like a national AI incident database, the framework aims to balance innovation with safety, though its impact will hinge on timely implementation and effective interpretation of existing laws. This aligns closely with the AIdea of India: Outlook 2026 report, which charts India’s shift from AI pilots to scaled adoption and highlights responsible AI, GenAI integration, workforce transformation and governance readiness. Together, they signal India’s push toward an AI-native economy where innovation grows within a flexible, accountable governance ecosystem.

