What India’s AI governance guidelines mean for businesses
This episode of the EY India Insights podcast explores India’s light-touch AI governance, ethical adoption, enterprise accountability and how businesses can build trust while innovating responsibly.
In this episode of the EY India Insights podcast, Rajnish Gupta, Partner in the Tax and Economic Policy Group at EY India, discusses India’s evolving approach to AI governance and what it means for businesses and policymakers. The conversation explores India’s recently released “light-touch, innovation-first” AI guidelines and how they aim to encourage experimentation while ensuring responsible deployment. Rajnish shares insights on human accountability in AI systems, the importance of robust enterprise governance frameworks and the role of regulatory sandboxes in enabling safe innovation. He also compares India’s approach with global regulatory models and outlines key priorities for building trust, reducing risk and supporting ethical AI adoption.
Key takeaways:
India’s “light-touch, innovation-first” AI governance approach enables businesses to experiment and scale AI solutions without pre-approvals, while relying on voluntary safeguards and existing legal frameworks.
Human accountability remains central to AI adoption — organizations, not algorithms, are responsible for outcomes, making clear ownership, contracts and oversight critical.
Enterprises should view AI governance as a risk management function, focusing on data quality, transparency, testing, ongoing monitoring and human-in-the-loop decision making.
Regulatory sandboxes can enable responsible innovation by allowing controlled testing of high-impact AI systems, provided they ensure transparency, non-discrimination and a level playing field.
India’s light-touch approach to AI governance is a conscious bet on innovation. By avoiding heavy pre-approval and emphasizing human accountability, it allows businesses to scale AI responsibly within existing legal guardrails.
Rajnish Gupta
Partner, Tax and Economic Policy Group, EY India
For your convenience, a full-text transcript of this podcast is available at the link below:
Pallavi
In today’s conversation, we explore the recently released AI governance guidelines and discuss what they mean for businesses, policymakers and India’s innovation agenda. Rajnish shares insights on how India’s “light-touch, innovation-first” approach can support responsible AI adoption while balancing legal, ethical and operational considerations.
A very warm welcome to you, Rajnish, and thank you for joining us today.
Rajnish
Thanks for inviting me, Pallavi, for the podcast. Delighted to be here with you.
Pallavi
Thank you, Rajnish. India’s AI governance guidelines are described as “light-touch and innovation-first.” What does this approach mean for organizations and how does it balance growth with responsible AI deployment?
Rajnish
India’s approach to AI is actually a bet on innovation. What it really means is that there will be minimal regulatory burdens, codes will be voluntary in nature, and no pre-approvals will be required before companies launch algorithms. This will allow businesses to build and scale AI without stifling growth and without having to go to a regulator for approvals. So, you go ahead, build, experiment, compete, and adopt voluntary safeguards based on how you see the risks.
In its own way, it also signals to regulators to keep the frameworks flexible and avoid heavy compliance requirements. The real balance between growth and responsibility will be achieved automatically, as the people developing the algorithms will bear the full cost and reap the full rewards of their actions.
Pallavi
Thank you, Rajnish. The guidelines emphasize human accountability instead of assigning responsibility to AI systems. What are the practical implications of this principle for companies adopting AI, particularly generative AI?
Rajnish
Machines or algorithms should never be legal persons. Only human beings can be responsible. Only they can be accountable, as they are the only ones who can gain or lose. For any business using generative AI, this means the responsibility rests with the firm. If the output has a negative impact, the firm will bear the cost, whether through damage to its reputation, a potential lawsuit, or the loss of customers.
From a practical point of view, when people are looking at generative AI, we get questions such as: who owns the output; how much oversight is needed; if something is not right with the algorithm, how do you correct it; and if there is legal action and a need to compensate, how do you take care of it?
Also, contracts will have to be structured carefully so that responsibility is clearly allocated among the people who develop or provide the models, the integrators who offer the algorithms, and the business users, keeping in view the existing IT law, consumer law and the sectoral laws.
Pallavi
Thank you, Rajnish. What should be the key components of an effective AI governance framework for enterprises that covers data quality, transparency, risk assessment, monitoring and human oversight?
Rajnish
When businesses look at governance, they really need to consider how the framework would be viewed by three categories of stakeholders: the regulators, the customers, and the courts. So businesses need to treat this as a risk management function rather than merely an area of regulatory compliance.
Some of the things that need to be taken care of are: the data should be sourced clearly and legally, and data quality standards need to be maintained, especially for the data used in training, so that model behavior can be explained if a question comes up. For algorithms, document and share as much as possible about the purpose, the limitations and the type of training data used, along with some guidance on appropriate use and misuse, especially for key customers. While algorithms are being developed, record the design choices and the evaluation results; this can come in handy if somebody asks questions at a future date. Then, of course, conduct tests before deployment, checking for bias and robustness across various scenarios, and monitor on an ongoing basis.
Finally, ensure that there is a ‘human in the loop’, especially where decisions of great significance are taken, for example, those affecting loans, jobs, healthcare or law enforcement. In such cases, AI should be seen as a decision support tool rather than something that gives the final decision.
Pallavi
Speaking of experimentation, regulatory sandboxes are highlighted as a tool for safe experimentation. So how can sandboxes help organizations innovate responsibly? And what checks are needed to prevent misuse?
Rajnish
Before the Information Technology (IT) Act, 2000 came into being, electronic signatures and digital contracts were not legally recognized. But people were already transacting using electronic means. What the IT Act did was make sure that these were legally permissible. As technology changes, the law may need to be amended, or you may need something new, so that it stays in line with the way business is conducted.
What the sandboxes do is let firms test high-impact AI algorithms with real users in controlled conditions. So, you know what the benefits are, what types of risks exist, and what regulatory changes may be required for an effective rollout of that algorithm.
The sandboxes must remain open, transparent and non-discriminatory. The basic risk with sandboxes is unequal access: we should not get into a position where only a particular set of players is able to get onto the sandboxes while the others are left waiting for an opportunity to have their applications tested. So, there should be a level playing field, and everybody, whether an influential player or not, should be able to get the benefit of the regulatory sandboxes.
Pallavi
Thank you, Rajnish. Now lastly, how do India’s guidelines compare with global AI regulatory trends, and what should be India’s roadmap over the next few years to strengthen trust, reduce risks and promote ethical AI adoption?
Rajnish
India has a very free AI ecosystem at this point in time. The real contrast is with the EU AI Act, which is much more detailed and binding, places strong obligations on high-risk systems, and imposes specific bans even before a system is rolled out.
That is not the case with India. We are saying that the algorithms will be subject to existing laws and regulations. So, it is not that we are in the Wild West, but there is no AI-specific law being proposed right now. As far as the roadmap for the next five years is concerned, what we are saying is that if something goes wrong or if the AI is not responsible, then you address it through private litigation using the existing laws, and not through administrative orders or checklists.
It is really for the markets and the larger ecosystem to define what Responsible AI is, which has to be within the existing legal guardrails that we have in the country. So, the industry can have its own codes of conduct, it can make sure that standards are met and that there are independent audits; there is a lot that businesses can do.
I think going forward, what would be helpful is to strengthen the existing institutions. It is about making sure that people are educated and understand AI systems better, how they work and how they do not, so that as and when cases related to Responsible AI come up, trained and informed institutions can deliver faster dispute resolution.
So, trust will take care of itself as long as we make sure that there is fair competition. And if anything is not the way it should be, then the existing regulatory system has enough expertise and knowledge to deal with it.
Pallavi
Thank you, Rajnish. That brings us to the end of this episode. Thank you so much for sparing time and sharing such valuable perspectives on AI governance.
Rajnish
You are most welcome.
Pallavi
Thank you to all our listeners. Thanks for tuning in to this episode of the EY India Insights podcast. We hope you found Rajnish’s perspectives on AI governance thought-provoking and useful as organizations prepare for the next phase of AI-led innovation.
To listen to more such conversations featuring leaders and industry experts, visit ey.com/in or follow the EY India Insights podcast on your preferred audio platform. We will be back soon with another engaging discussion. Until then, this is Pallavi signing off.