Navigating innovation and data privacy for children in the age of AI
Listen to our Cybersecurity Awareness Month podcast on the intersection of artificial intelligence (AI), the DPDP Act and ethical considerations around data privacy for children.
In the latest episode of EY India Insights’ Cybersecurity Awareness Month series, Lalit Kalra, Partner, Cybersecurity and National Leader-Data Privacy, EY India, throws light on the crucial intersection of artificial intelligence (AI), the Digital Personal Data Protection (DPDP) Act and the ethical considerations surrounding data privacy for children. As AI continues to revolutionize various sectors, including education and business, it also raises significant ethical questions about governance and the protection of vulnerable groups, particularly children.
Lalit outlines practical strategies organizations can adopt to enable compliance while leveraging AI's transformative potential: adopting robust governance frameworks, developing AI ethically and creating a culture of privacy within organizations.
Key takeaways
Integrating privacy by design into AI systems can help organizations enable compliance with the DPDP Act as they innovate.
Establishing centralized governance across the AI lifecycle is essential for identifying security risks and implementing algorithmic accountability.
Moving beyond generic consent options to tiered consent models can empower users and enhance transparency in data usage.
AI can improve privacy protection through smarter consent tools, automated compliance monitoring, and advanced privacy-enhancing technologies.
Organizations in education must implement strong security measures and secure parental consent for collecting children's data per the DPDP Act.
AI-powered educational tools can enhance children's learning by personalizing content while ensuring privacy safeguards for sensitive information.
Strong collaboration across legal, technology and compliance teams would help an organization strike the right balance between innovation and compliance.
Lalit Kalra
Partner, Cybersecurity and National Leader-Data Privacy, EY India
For your convenience, a full text transcript of this podcast is available at the link below:
Pallavi
Hello and welcome to the EY India Insights Podcast. I am Pallavi, your host for today. As part of the Cybersecurity Awareness Month series, in this episode we are focusing on AI, the DPDP Act, and navigating ethical innovation and data privacy for children, a theme that is foundational to responsible innovation.
As we all know, artificial intelligence has the power to transform education, businesses and society. But it also raises critical questions about ethics, governance and the protection of vulnerable groups, especially children. With India’s Digital Personal Data Protection Act setting a new benchmark for data privacy, organizations must learn to balance opportunity with responsibility. To help us navigate these questions, I am joined by Lalit Kalra, Partner, Cybersecurity and National Leader-Data Privacy, EY India. Welcome, Lalit.
Lalit Kalra
Thank you for having me here. It's exciting to talk about a topic that's very close to my heart.
Pallavi
Lalit, thank you. To begin with, how can organizations strike the right balance between leveraging AI for innovation and ensuring compliance with India's Digital Personal Data Protection Act?
Lalit Kalra
In today's world, organizations are trying to leverage AI for everything that is possible, wherever automation can be done. That is where organizations are trying to utilize AI use cases, and as this goes further, a lot of personal data gets consumed by AI platforms. There is a lot of autonomous decision-making that happens, and a lot of personal data gets processed. So how can an organization strike a balance between leveraging AI for innovation and staying compliant?
DPDP gives us some basic checks that organizations can put in place to ensure transparency, security and compliance. The first is embedding privacy by design into AI systems. Privacy by design as a concept allows an organization to consider privacy at the start of any process or system being built.
Embedding it into AI systems helps them integrate key principles like data minimization, purpose limitation and transparency into the architecture of AI models from the very start. The next step an organization can take is establishing a robust AI governance framework. Organizations today are utilizing AI, but very few are taking steps to govern it. Implementing centralized governance across the AI lifecycle to identify security and model risks, covering consent management, risk management and algorithmic accountability, would go a long way in helping the organization balance this aspect.
The third piece is modularizing your consent and transparency mechanisms. Organizations should move beyond generic ‘I agree’ buttons by offering tiered consent options for core functions, optional features and future R&D. Last but not least is collaboration. Strong collaboration across legal, technology and compliance teams would help an organization strike the right balance between innovation and compliance.
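To make the tiered-consent idea concrete, here is a minimal sketch in Python. The purpose names and the data structure are illustrative assumptions, not prescriptions from the DPDP Act or from EY:

```python
from dataclasses import dataclass, field
from enum import Enum


class Purpose(Enum):
    """Illustrative processing purposes a tiered consent flow might expose."""
    CORE_SERVICE = "core_service"        # required for the product to work
    PERSONALIZATION = "personalization"  # optional feature
    RESEARCH = "research"                # future R&D / model training


@dataclass
class ConsentRecord:
    """One user's consent state, captured per purpose rather than as one 'I agree'."""
    user_id: str
    granted: set[Purpose] = field(default_factory=set)

    def grant(self, purpose: Purpose) -> None:
        self.granted.add(purpose)

    def withdraw(self, purpose: Purpose) -> None:
        self.granted.discard(purpose)

    def allows(self, purpose: Purpose) -> bool:
        return purpose in self.granted


# Usage: a user opts in to the core service and personalization, but not R&D.
record = ConsentRecord(user_id="u-123")
record.grant(Purpose.CORE_SERVICE)
record.grant(Purpose.PERSONALIZATION)
assert record.allows(Purpose.PERSONALIZATION)
assert not record.allows(Purpose.RESEARCH)  # this user's data stays out of R&D
```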
Pallavi
Thank you, Lalit. Now, pivoting to the AI aspect: do you see AI as an enabler for stronger privacy protection, or does it mostly introduce new risks?
Lalit Kalra
As with every new technology, there come some risks. However, I view AI as a positive force that can significantly improve the efficiency and accuracy of our work. Like any new transformative technology that adds efficiency and productivity, it brings some risks with it, and it requires well-defined guardrails to ensure responsible and safe adoption.
Some of the use cases organizations can consider are smarter consent and transparency tools: with AI-powered interfaces, organizations can simplify privacy notices, personalize consent options and give more control to users. AI can also be used for automated risk detection and compliance monitoring, continuously monitoring data flows, flagging anomalies and automating privacy activities.
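As a toy illustration of the automated monitoring described here, the sketch below flags unusual spikes in daily personal-data access volumes. The simple z-score rule, the threshold and the sample numbers are all assumptions made for illustration:

```python
import statistics


def flag_anomalous_flows(daily_counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of days whose access volume deviates strongly from the norm.

    A deliberately simple z-score rule; real deployments would use richer
    models and contextual signals (who accessed what, from where, and why).
    """
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mean) / stdev > threshold]


# Usage: day 6 shows a spike in personal records accessed and is flagged for review.
counts = [120, 115, 130, 125, 118, 122, 950, 121]
print(flag_anomalous_flows(counts))  # -> [6]
```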
Last but not least, embedding privacy-enhancing technologies. AI can enable advanced privacy-enhancing technologies like differential privacy, federated learning and homomorphic encryption across the lifecycle, giving organizations an opportunity to uplift their privacy program and manage the risks arising from AI adoption.
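For a flavor of one privacy-enhancing technology mentioned here, differential privacy, this minimal sketch adds calibrated Laplace noise to a count query. The epsilon value and the data are illustrative, and a production system would use a vetted library rather than hand-rolled noise:

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(flags: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count of True entries.

    A counting query has sensitivity 1 (adding or removing one person changes
    the answer by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    return sum(flags) + laplace_noise(1.0 / epsilon)


# Usage: report roughly how many students opted in, without exposing the exact count.
opted_in = [True, False, True, True, False, True]
print(round(dp_count(opted_in, epsilon=0.5), 1))  # e.g. 5.3; varies per run
```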
Pallavi
Thank you, Lalit. Now, with the DPDP Act placing a very strong emphasis on protecting children's data, what specific challenges do you see for the education and edtech sectors?
Lalit Kalra
Given the volume of children's personal data handled by the education and edtech sectors, it is essential for these organizations to prioritize robust security safeguards. Edtech platforms must obtain explicit and verifiable parental consent before collecting children's data; this is a mandatory requirement under the Act.
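As a hypothetical sketch of how such a gate might look in code, the snippet below blocks collection of a child's data until parental consent has been verified out of band. The 18-year threshold follows the DPDP Act's definition of a child; the profile fields and helper names are assumptions:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class UserProfile:
    user_id: str
    birth_date: date
    parental_consent_verified: bool = False  # set only after an out-of-band check


def is_child(profile: UserProfile, today: date, majority_age: int = 18) -> bool:
    """The DPDP Act treats any individual under 18 years of age as a child."""
    age = today.year - profile.birth_date.year - (
        (today.month, today.day) < (profile.birth_date.month, profile.birth_date.day)
    )
    return age < majority_age


def may_collect_data(profile: UserProfile, today: date) -> bool:
    """Gate collection: a child's data needs verified parental consent first."""
    if is_child(profile, today):
        return profile.parental_consent_verified
    return True


# Usage: a 12-year-old's data is collected only once consent is verified.
student = UserProfile("s-42", birth_date=date(2013, 5, 1))
assert not may_collect_data(student, today=date(2025, 10, 1))
student.parental_consent_verified = True
assert may_collect_data(student, today=date(2025, 10, 1))
```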
The edtech and education industry can redesign and relook at the personalization engines, marketing flows, analytics models and algorithms they currently use so that these align with the purpose limitation and data minimization principles, thereby making the analytics and marketing functions more compliant.
These organizations should invest in privacy by design training, awareness sessions and workshops to build internal capacity and have a culture of privacy within the organization.
Pallavi
Thank you, Lalit. How can AI-driven learning tools be designed to protect children's privacy while still enhancing their learning experience?
Lalit Kalra
AI-powered tools are really changing the game when it comes to monitoring. They allow for more efficient tracking of specific requirements and can dynamically adjust to meet individual learning needs, making the experience both smarter and more personalized. They can provide dashboards and consent mechanisms that allow teachers and guardians to monitor, approve or restrict data usage, thereby enhancing trust and accountability while supporting a safe learning environment.
Secondly, one can tailor content to an individual's learning style, language needs and accessibility requirements, especially for children with disabilities or in underprivileged communities, while ensuring fairness and avoiding the biases that can come with AI.
Pallavi
Our listeners would also like to know: what does “trust by design” mean in the context of AI, and how can organizations embed this principle from the very beginning of product development?
Lalit Kalra
That's an interesting question, Pallavi. ‘Trust by design’ is a proactive philosophy where trust is not added as a compliance checkpoint later, but is embedded from the very start of AI system development. This is aligned with the concept of privacy by design that I talked about earlier. Organizations can achieve trust by design by adopting responsible AI frameworks with cross-functional teams and building explainable and bias-free models, ensuring that AI utilization within the organization is in line with regulation while helping them strike a balance between innovation and compliance.
Pallavi
Thank you, Lalit. Now that we've addressed the challenges and the trust by design concept, could you share some practical steps or best practices for aligning ethical AI development with DPDP Act compliance?
Lalit Kalra
It is a journey that organizations have to take to align ethical AI development with the DPDP Act. The first step an organization should take is a risk assessment to determine where they stand today and baseline themselves. Depending on the outcome of the risk assessment, organizations can implement consent mechanisms to make all users and data principals aware of the data being collected and whether it is being used for AI development.
Organizations can assess the feasibility of tools and technology for responding to data principal rights requests. They can follow the principles of data minimization and purpose limitation. What goes a long way for organizations is establishing cross-functional teams for AI and privacy governance, which ensures there is adequate training and awareness. And last but not least, embedding privacy by design into the AI development lifecycle, ensuring organizations create a privacy-aware culture and privacy-compliant AI solutions.
Pallavi
Beyond regulatory requirements, how can organizations adopt a more conscience-driven approach towards AI governance?
Lalit Kalra
A conscience-driven approach means more than compliance. It refers to developing a model where ethical responsibility, human-centric values and long-term societal impact are embedded into the design, deployment and oversight of AI systems. There are frameworks that guide ethical AI governance, such as AI C2C (conscious to conscience), which has three phases: AI-conscious adoption, which is essentially absolute awareness and controlled experimentation; AI plus human intelligence collaboration, which is about ethical co-creation along with oversight; and AI-conscience governance, wherein autonomous systems are utilized with embedded ethical safeguards. Organizations can adopt such a framework to ensure that AI governance is managed well within the organization.
Pallavi
Thank you, Lalit. Now lastly, what role do leadership, culture and awareness play in shaping this transition from compliance to conscience?
Lalit Kalra
From compliance to conscience is a maturity journey: a move from reactive compliance actions to proactive, thoughtful, conscious decisions during the development of any model or system. Leaders can demonstrate this by championing responsible innovation and by embedding trust, fairness and accountability into strategic decisions. This approach requires a culture that values transparency, inclusiveness and reflection. Awareness is required at all levels within the organization to make the ethical use of AI the default way to operate.
Pallavi
Thank you, Lalit. Thank you for joining us and sharing your insights on navigating the intersection of AI ethics and data privacy. Our key takeaway from this conversation is building trust, especially when it comes to children's data and education, because it is at the heart of responsible innovation.
Lalit Kalra
Thank you.
Pallavi
And that brings us to the end of this episode. Thank you to all our listeners for tuning in to this special Cybersecurity Awareness Month episode of the EY India Insights podcast. Stay tuned for more conversations on the future of trust in a digital world.