EY helps clients create long-term value for all stakeholders. Enabled by data and technology, our services and solutions provide trust through assurance and help clients transform, grow and operate.
At EY, our purpose is building a better working world. The insights and services we provide help to create long-term value for clients, people and society, and to build trust in the capital markets.
In the latest episode of the EY India Insights’ Cybersecurity Awareness Month series, Mini Gupta, Partner, Cybersecurity Consulting, EY India, reflects on a pressing issue: the trade-off between the convenience AI offers and control over personal data. She highlights that AI systems analyze everything from prompts and preferences to biometric and financial data in order to give usable results.
The conversation also uncovers real-world privacy concerns, from over-collection of data to poorly configured systems that are vulnerable to breaches and data exposure. Additionally, Mini points out that companies can no longer afford opaque data practices, and that new regulations like India’s Digital Personal Data Protection Act, 2023 are pushing for greater accountability and transparency.
Key takeaways
Data over-collection and misuse are major privacy concerns
Weaknesses in AI systems can include weak encryption, misconfigured APIs and insufficient robustness against unexpected inputs
Companies developing AI tools must establish trust as a key differentiator for their users
AI solution companies should prioritize security and transparency as core components
User awareness and practicing digital hygiene are some of the most effective defenses
Organizations that demonstrate the ability to balance convenience and privacy will not only build trust but also gain a competitive edge in the AI era, particularly in India, where users are becoming increasingly aware of privacy issues.
Mini Gupta
Partner, Cybersecurity Consulting, EY India
For your convenience, a full text transcript of this podcast is available at the link below:
Pallavi
Hello and welcome to the new season of EY India Insights’ Cybersecurity Awareness Month series. I am your host, Pallavi, and today we are exploring a topic on everyone’s mind: AI convenience, and how secure your data really is when you use AI.
Joining us today is Mini Gupta, Partner in Cybersecurity Consulting at EY India. With over 23 years of experience, Mini has advised Fortune 500 companies, government agencies and leading Indian enterprises on data privacy, cyber risk and digital trust. She is a recognized leader in AI governance, data privacy and data protection and security. She has led various transformative projects that helped organizations secure sensitive data and maintain data privacy while adopting AI.
Mini, it is a pleasure to have you on the show today.
Mini Gupta
Thanks, Pallavi. Pleasure to be here as well.
Pallavi
Mini, to begin with, when we use AI powered services, how is our personal and organizational data used? And what are the biggest privacy concerns, especially in the Indian context?
Mini Gupta
When we use AI powered services, whether it is ChatGPT answering queries, Netflix recommending shows, Siri setting reminders, or a bank's AI flagging fraud, our personal and organizational data fuels these systems. Now, depending on the type of AI, the data use could vary.
For example, generative AI may analyze our prompts; predictive AI in healthcare or insurance would crunch sensitive records to forecast risks; and computer vision systems used in Aadhaar KYC would process biometrics. So, the common thread across all of these, you will see, is our data, which is at the heart of it all. And that is where these risks or privacy concerns kick in.
For example, data misuse or over-collection: AI systems could take much more data than is needed. There could be consent gaps, where users are not always given real choices. Another example could be cross-border flows: a lot of Indian data is stored on global servers, which raises sovereignty questions. But let us go deeper into the security side, because that is often overlooked.
So, for example, we have data leaks from unsecured data sets that could expose millions of records. Another risk could be weak encryption, which means data is not safe at rest or in transit. Another example is poorly configured APIs, which connect different AI systems and may accidentally make sensitive information accessible and vulnerable, if the APIs are not configured securely.
Imagine a hospital using AI diagnostics. If the patient data is not encrypted end to end, a leak could have massive consequences. A fintech app relying on AI for loan approvals; if its APIs are not locked down, sensitive credit data could be exposed in minutes. Even retail platforms that recommend products; if they over track user behavior, they risk eroding customer trust.
So, if you look at it, while AI comes with a lot of convenience, there are concerns around privacy and security with its usage. The takeaway is that AI is powerful, but it is only as trustworthy as the way we handle its data.
Pallavi
AI brings a lot of convenience, but often at the cost of control over our data. So, how do you see this privacy paradox shaping user trust in AI?
Mini Gupta
That is a good question, Pallavi. The so-called privacy paradox really sits at the heart of AI adoption. On one hand, AI makes our lives incredibly convenient: food apps predict what we will order next even before we do, banks flag fraud before we even notice it, and countless utilities and tools save hours of work. But on the other hand, all of this convenience comes at a cost. We often give up control over our data, in terms of how it is collected, stored, processed, or even shared.
Let us break down each of these elements. When we look at data collection, AI systems gather vast amounts of personal data, sometimes more than we realize or more than what is required. This includes not just identifiers, but also our locations, our habits, our preferences and even sensitive information like health or financial data. Let us look at the second dimension, storage. Where and how the data is stored matters. Weak encryption or centralized databases can become prime targets for breaches. If we look at transfer, many AI services rely on global cloud infrastructure, and that is a regular practice. However, that means Indian user data may cross borders, which can conflict with regulations requiring data localization. So, one needs to look at that element, and at whether this global infrastructure is secure and sits in jurisdictions that have strong protections and secure practices as well.
Now, if you look at the next dimension around data, that is data processing. AI systems often analyze and combine data in ways that are not transparent, creating risks of profiling, bias and unintended exposure. All these layers, if you look at them, influence user trust. So, if people feel they are constantly trading privacy for convenience, without clear boundaries, then obviously the trust gets eroded.
Imagine using an AI powered health app. It is convenient, but if you worry about your medical records being shared or stored insecurely, you will hesitate to adopt it fully. Thankfully, in India, the Digital Personal Data Protection Act, 2023 is designed to give individuals more control at each of these stages. For example, consent for collection, transparency in storage and processing, and clarity on cross-border data transfers are things that the Act clearly calls out.
But obviously, one needs to look at the effective implementation of the same, because that is what will create the perception and the trust. So, trust is becoming the currency for AI adoption. Companies that are transparent, allow users to make granular choices, secure data end to end and share data responsibly are the ones building real loyalty. Breaches or opaque practices, even minor ones, can quickly undo that trust.
The privacy paradox is not just a tension; it is, in fact, an opportunity. Organizations that prove that convenience and privacy can coexist will not only earn trust but gain a competitive advantage in the AI era, especially in India, where users are increasingly privacy-aware.
Pallavi
Thank you, Mini. How transparent are AI companies about how they collect, store and use data? And how are evolving data protection regulations, in India and globally, driving greater accountability?
Mini Gupta
When it comes to transparency in AI, the truth is we are still in a bit of a gray zone. Some AI companies are very open about how they collect, store and use data. They publish privacy policies, explain model training practices and offer users some control. But many others often keep the details vague, using broad statements like, ‘we use your data to improve our services’ without really explaining what that means or what data is being collected. How long is it being stored? What does improvement of services mean? How is it being used? Is it being shared with third parties, etc.? So, from a user's perspective, that can feel like a black box. You are getting a powerful tool, but you do not know what happens to your data behind the scenes and that is why trust and transparency are becoming the key differentiators for AI companies today.
The evolving data protection regulations are starting to change this landscape, both in India and globally. In India, the Digital Personal Data Protection Act, 2023 requires organizations to clearly communicate the purpose of data collection, limit processing to that purpose and give users rights like access, correction and the ability to withdraw consent. Globally, frameworks like the EU GDPR and the California Consumer Privacy Act (CCPA) are forcing companies to be far more accountable for how they handle personal data.
We also have regulations governing AI across the globe, such as the EU AI Act and other upcoming Acts. The effect is that companies are being pushed to adopt privacy-by-design, security-by-design and governance-by-design principles, maintain clear records of processing activities, and report breaches promptly.
So, in other words, transparency is not just a nice-to-have anymore. It is now legally mandated and increasingly expected by users as well.
Pallavi
Thank you, Mini. For organizations adopting AI, what do you see as the most significant cybersecurity risks and how can they strike the right balance between convenience and strong security?
Mini Gupta
When organizations adopt AI, the benefits are huge: automation, faster decision making and better customer experiences. But there are risks associated with it. There is no denying that cybersecurity and privacy risks come with some of these conveniences. The reality is that AI systems handle massive amounts of sensitive data, and this makes them prime targets for attackers. So, let us look at some examples. Data breaches, where sensitive customer, employee or financial data is exposed, are rampant. Similarly, we are hearing about model attacks like adversarial inputs that trick AI into making wrong decisions, which can have operational as well as reputational consequences.
On the other hand, an AI system is often integrated with several upstream and downstream systems. So, misconfigured APIs, which often serve as gateways between AI systems and the other enterprise platforms, if left unsecured, can leak a lot of critical information.
Look at supply chain vulnerabilities: organizations often depend on third-party models, data sets or cloud infrastructure. So, how are they ensuring that the entire supply chain is also secure? Organizations need to really balance convenience and strong security. It is about building security into AI right from the start, rather than treating it as an afterthought.
Some of the key approaches organizations could look at: for example, privacy by design, which means collecting only what is needed, anonymizing sensitive data and enforcing strict access controls. That brings in some control over risk exposure.
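A privacy-by-design pipeline like the one described, collect only what is needed and anonymize the rest, can be sketched in a few lines. This is a minimal illustration, not EY's methodology; the field names, the allowlist and the salt below are all hypothetical.

```python
import hashlib

# Hypothetical allowlist: only the fields this AI feature actually needs.
ALLOWED_FIELDS = {"age_band", "city", "purchase_category"}

# Fields that must never leave the system in raw form.
IDENTIFIERS = {"email", "phone"}

def minimize_and_pseudonymize(record: dict, salt: bytes) -> dict:
    """Drop unneeded fields; replace identifiers with salted, truncated hashes."""
    out = {}
    for key, value in record.items():
        if key in ALLOWED_FIELDS:
            out[key] = value
        elif key in IDENTIFIERS:
            # Stable pseudonym: the same input maps to the same token,
            # but it is not reversible without the salt.
            out[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()[:16]
        # Anything else (e.g. GPS traces) is silently discarded: data minimization.
    return out

record = {"email": "a@example.com", "phone": "9999999999",
          "age_band": "25-34", "city": "Pune", "gps_trace": "raw-coordinates"}
clean = minimize_and_pseudonymize(record, salt=b"per-deployment-secret")
```

Keeping the salt per deployment, and out of the dataset itself, is what makes re-identification hard; rotating the salt deliberately breaks linkability across data releases.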
The other element is adopting encryption, both at rest and in transit, to protect data even if it is intercepted. Organizations can look at API security and monitoring: regular audits, rate limiting and authentication checks to prevent unauthorized access. Similarly, model governance and monitoring: track how models are performing, detect anomalies and ensure updates do not introduce new vulnerabilities.
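The API controls mentioned here, authentication checks and rate limiting, can be illustrated with a minimal sketch. The service name, token and limits are hypothetical, and a production gateway would use a proper framework; this only shows the two checks applied in order.

```python
import hmac
import time

# Hypothetical service credentials; real systems would use a secrets manager.
API_TOKENS = {"svc-recommender": "s3cr3t-token"}

class RateLimiter:
    """Fixed-window limiter: at most `limit` calls per `window` seconds per client."""
    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit, self.window = limit, window
        self.calls = {}  # client -> list of recent call timestamps

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        recent = [t for t in self.calls.get(client, []) if now - t < self.window]
        recent.append(now)
        self.calls[client] = recent
        return len(recent) <= self.limit

limiter = RateLimiter(limit=3, window=60.0)

def handle_request(client: str, token: str) -> str:
    # Authentication first: constant-time comparison avoids timing leaks.
    expected = API_TOKENS.get(client, "")
    if not hmac.compare_digest(expected, token):
        return "401 Unauthorized"
    # Then rate limiting, so one client cannot exhaust the AI backend.
    if not limiter.allow(client):
        return "429 Too Many Requests"
    return "200 OK"
```

The design choice worth noting is the ordering: unauthenticated calls are rejected before they consume rate-limit budget or touch the backend at all.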
The other element is employee awareness and policies. Humans are often the weakest link, so training and clear processes are critical. The key is to make security part of the AI workflow, not a separate hurdle: have a strong Responsible AI framework, with security and privacy practices well ingrained within it.
This way, organizations can enjoy the convenience, speed and insights that AI offers without compromising trust or exposing themselves to costly breaches. The best approach is secure by design, convenient by default.
Pallavi
Thank you, Mini. With AI advancing so rapidly, what new approaches or technologies do you expect in cybersecurity to stay ahead of emerging threats?
Mini Gupta
AI is evolving at an incredible pace and, naturally, the cybersecurity landscape has to evolve just as quickly. What is interesting is that cybersecurity itself is now leveraging AI to stay ahead of emerging threats. Traditional approaches like signature-based detection or rule-based systems are not enough when attackers themselves are using AI to launch much more sophisticated attacks.
Certain trends that we see include, for example, AI-driven threat detection. This is where systems can analyze massive volumes of data in real time, identify patterns and flag anomalies before they turn into breaches. Think of it like having a digital immune system that adapts as threats evolve. Another element is behavioral analytics, which monitors user and system behavior to spot deviations that might indicate an attack, even if the malware itself is brand new.
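The behavioral-analytics idea, flagging deviations from a behavioral baseline, can be sketched with a simple statistical check. Real detection systems use far richer models; the login counts and the three-sigma threshold here are illustrative assumptions.

```python
import statistics

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` if it deviates from the historical baseline by more
    than `threshold` standard deviations (a classic z-score test)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Perfectly constant history: any change at all is a deviation.
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Hypothetical daily login counts for one service account.
history = [102, 98, 105, 99, 101, 97, 103]
is_anomalous(history, 100)  # a typical day, not flagged
is_anomalous(history, 900)  # a sudden spike worth investigating
```

In practice the baseline would be per-entity (per user, per host, per API key) and continuously re-learned, which is what lets such systems catch novel attacks that signature databases have never seen.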
Another trend is automated incident response, where AI can not only detect threats but also take immediate preventive, remedial and containment actions, such as isolating affected systems, applying patches wherever feasible, or triggering alerts without waiting for human intervention. Another area where we see AI playing a big role is AI for penetration testing: simulating attacks to identify vulnerabilities continuously, rather than waiting for regularly scheduled security audits.
In addition, we are seeing security by design become more integral, where AI systems themselves are built with privacy and resilience at the core, rather than as an afterthought. And like we have said before, global frameworks like the GDPR, the CCPA and India’s DPDPA, along with regulations like the EU AI Act, are driving accountability and forcing companies to integrate these advanced protections.
So, the key takeaway is that cybersecurity is no longer reactive. It has to be predictive as well as adaptive. As AI-powered attacks get smarter, cybersecurity measures are becoming smarter by adopting AI too.
Pallavi
Thank you, Mini. Finally, what simple but effective steps can individuals and businesses take today to safeguard their data while still enjoying the benefits of AI?
Mini Gupta
While AI offers huge benefits to all stakeholders, individuals and businesses need to adopt certain practices. We have spoken a lot about businesses, so from an individual's perspective, certain things will go a long way. It starts with being more mindful of what we share with the various AI apps and tools that we use. What is the data that they really need? So, think before we click ‘allow’. Frankly, we should not be in a rush to upload our data, pictures, etc., to any AI tool just because we see rampant use across the board or among our friends and family, without really understanding the privacy and security implications.
Another element is very basic but still among the most effective: use strong passwords and have multi-factor authentication in place. We should also keep devices and apps updated; patches often fix security vulnerabilities before attackers can exploit them. So, it is very important for us to know which patches may be missing on the devices and apps we use, and to keep them current. Another element that we typically overlook is the security and privacy settings that many of these AI platforms offer. By default, they may be set to the least secure or least private mode, but many AI platforms offer controls around data sharing, retention and personalization, and users can turn those to their advantage.
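The advice on strong passwords can be made concrete with a small sketch using Python's standard `secrets` module, which is designed for cryptographic randomness (unlike the general-purpose `random` module). The alphabet and length below are illustrative choices, not a recommendation from the conversation.

```python
import secrets
import string

# Illustrative character set: letters, digits and a few common symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 16) -> str:
    """Build a password from cryptographically strong random choices."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pwd = generate_password()
```

A password manager does exactly this at scale; the point of the sketch is that unpredictability should come from a cryptographic source, never from patterns we invent ourselves.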
Those are some of the measures for individuals. On the business side, like we have already covered, just to summarize: minimize data collection, gather only what is really necessary for the AI system to work and anonymize as much as possible while still letting the AI work effectively. Encrypt data both at rest and in transit to prevent leaks. Secure APIs and third-party integrations; they are often the weakest link. Regularly monitor and audit AI models to make sure they are performing as expected and not inadvertently exposing sensitive information. Train employees; human error is one of the biggest risks, so awareness and clear protocols are critical. And it is very important to adopt Responsible AI frameworks and practices, with AI security and AI privacy strongly embedded in them.
The key takeaway is that you do not have to choose between convenience and security. By following some simple steps, individuals and businesses can enjoy the power of AI safely, without compromising on convenience, while ensuring trust along the way. At the end of the day, awareness, basic hygiene and the right design are the most powerful defenses anyone can put in place today.
Pallavi
Thank you, Mini. That brings us to the end of this conversation. Thank you once again for joining us and sharing all your valuable insights. It is clear that while AI is transforming the way we live and work, safeguarding data must remain a non-negotiable priority.
Mini Gupta
Thank you so much, Pallavi. It has been wonderful to have spoken on this important topic.
Pallavi
Thank you. To all our listeners, do not miss the other episodes in our Cybersecurity Awareness Month series on EY India Insights. Until next time, stay secure and stay informed and keep your data safe.
Discover how EY's cybersecurity, strategy, risk, compliance & resilience teams can help your organization with its current cyber risk posture and capabilities.
Data protection & privacy services at EY ensure data security, lifecycle management, compliance frameworks, risk assessment and strategic privacy solutions.
Discover how EY's identity and access management (IAM) team can help your organization manage digital identities for people, systems, services and users.