Securing the future: Navigating AI risks in an evolving digital world
Explore top AI security threats, including adversarial attacks, data poisoning, and model inversion, and how a lifecycle, governance-first approach can safeguard innovation.
In this episode of the EY India Insights Podcast, we explore one of the most pressing challenges in today’s digital landscape: AI Security. As artificial intelligence becomes integral to how organizations operate, it also brings new risks that need proactive attention. Mini Gupta, Partner, Cybersecurity Consulting, EY India, shares her perspectives on managing these risks while balancing innovation, ethics, and governance.
Mini outlines key AI security threats such as adversarial attacks, data poisoning, and model inversion, where attackers exploit AI systems by subtly manipulating data or models. She emphasizes the importance of a lifecycle approach—embedding security from the design stage to deployment and beyond, and ensuring strong access controls, monitoring, and responsible AI governance.
Key takeaways:
Top AI security risks include adversarial attacks, data poisoning, model inversion, and zero-click vulnerabilities.
AI security must follow a full lifecycle approach—starting from secure data sourcing and model design to deployment and monitoring.
Future trends in AI security include AI-powered threat detection, formal verification for AI systems, secure AI supply chains, and stricter regulations like the EU AI Act and India’s sector-specific guidelines.
CXO priorities should include aligning AI security with business risk, embedding governance, prioritizing explainability, managing third-party risks, and cultivating a culture of shared accountability and ethical innovation.
Building a culture of AI safety involves cross-functional education, celebrating responsible AI actions, encouraging open questioning, and regular simulation drills.
Organizations that will lead in AI security are the ones who can innovate confidently while embedding safety, ethics, and governance into everything they build.
Mini Gupta
Partner, Cybersecurity Consulting, EY India
For your convenience, a full text transcript of this podcast is provided below:
Pallavi
Hello and welcome to the EY India Insights podcast, where we bring you expert perspectives on the trends shaping businesses, industries, and the world at large. I am your host, Pallavi.
In today’s episode, we are exploring one of the most complex areas in the digital landscape, AI Security. As artificial intelligence transforms the way we work, connect, and grow, it also introduces new dimensions of risk that organizations must understand and manage effectively.
Joining me for this conversation is Mini Gupta, Partner and National Leader for Data Privacy and Data Protection, Cybersecurity Consulting, at EY India. Mini brings over 20 years of experience in technology risk management and cybersecurity, and leads cyber transformation initiatives across India, Africa, and the Middle East.
Mini, thank you for joining us.
Mini Gupta
Thank you, Pallavi. The pleasure's all mine.
Pallavi
Starting off with a basic question, what are the top AI security risks that organizations face today, including vulnerabilities in AI models? How can companies safeguard these systems throughout the lifecycle while balancing innovation with ethical AI practices and robust governance?
Mini Gupta
Today, some of the top AI security risks include threats to both the data as well as the models themselves. So, one major concern is adversarial attacks, where slight changes to inputs, sometimes even imperceptible ones, can cause AI models to make incorrect or even dangerous decisions. Think of something as simple as modifying a stop sign image, and suddenly the model thinks it is a speed limit sign.
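To make the adversarial-attack idea concrete, here is a minimal gradient-based (FGSM-style) sketch; the model, image, label, and epsilon are illustrative assumptions, not part of any specific system described in the conversation.

```python
# Minimal sketch of an adversarial perturbation, assuming a trained PyTorch
# image classifier `model`, an input `image` tensor and its true `label`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge the input in the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step along the sign of the gradient is often imperceptible to a
    # human yet enough to flip the prediction (e.g. stop sign -> speed limit).
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach()
```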
Then, there is data poisoning, where attackers tamper with the training data, so the model learns something wrong from the very beginning. It is all about what it is learning, so if that is tampered with, the foundation itself is compromised. It is especially risky in systems that retrain automatically on new data. We are also seeing more model inversion and model extraction attacks, where adversaries can reconstruct private training data or steal the model’s functionality just by interacting with it.
This is a huge risk if you are working with proprietary models or sensitive information. And with the more advanced AI integrations we are seeing with some of the leading AI agents, there are newer threats such as zero-click vulnerabilities, which allow attackers to access sensitive data without any user interaction at all. So, these risks are evolving, and they are evolving fast, and security has to keep up with this.
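For the data-poisoning risk in automatically retraining systems, one hedged illustration of a pre-retraining screen is a simple outlier quarantine on incoming records; the threshold and feature layout are assumptions for the sketch only.

```python
# Illustrative screen for a pipeline that retrains automatically on new data:
# quarantine incoming records whose features sit far outside the distribution
# of a trusted reference set before they can influence the model.
import numpy as np

def flag_suspect_records(reference: np.ndarray,
                         incoming: np.ndarray,
                         z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of incoming rows with any extreme-outlier feature."""
    mean = reference.mean(axis=0)
    std = reference.std(axis=0) + 1e-9          # avoid division by zero
    z_scores = np.abs((incoming - mean) / std)
    return np.where((z_scores > z_threshold).any(axis=1))[0]

# suspects = flag_suspect_records(trusted_training_data, new_batch)
# Anything flagged goes to human review instead of straight into retraining.
```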
Pallavi, the second part of your question was how you protect against all of this. It really takes a full lifecycle approach; you cannot just secure AI at the deployment stage.
It starts at the design phase itself, where we need to ensure the use of secure and well-curated data sets, validate model assumptions, and perform adversarial and red-team testing before go-live. It is the whole lifecycle that needs to be looked at. Once deployed, strong access controls, ongoing monitoring, and audit logging become essential.
And you want to be able to detect and respond to unusual behavior quickly, whether it comes from users, systems, or the model itself. So, monitoring for model drift and updating controls over time is also key. At the same time, companies need to balance innovation with what we see coming up big time: AI governance and ethics. That means organizations need responsible AI practices, policies, and frameworks in place.
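As one way to picture the model-drift monitoring mentioned above, a Population Stability Index check between a baseline window and a recent window of prediction scores is a common pattern; the 0.2 alert threshold used here is a rule of thumb, not a standard.

```python
# Sketch of a drift check on logged prediction scores.
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Higher PSI means the recent score distribution has drifted from the baseline."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch values outside the baseline range
    base = np.histogram(baseline, edges)[0] / len(baseline) + 1e-6
    curr = np.histogram(recent, edges)[0] / len(recent) + 1e-6
    return float(np.sum((curr - base) * np.log(curr / base)))

# if population_stability_index(last_quarter_scores, this_week_scores) > 0.2:
#     raise an alert and trigger a model review
```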
There should be clear documentation about how these models are being trained, what data they are using, and what their limits are, along with transparent processes and AI governance boards that review high-impact use cases. Overall, it needs to be a culture that encourages cross-functional collaboration between the various stakeholders throughout the lifecycle.
So, whether they are data scientists, security teams, legal teams, or ethics leads, it is about ensuring collaboration among all of them, along with the developer ecosystem. And let us not forget the people side: ongoing training for teams is also critical so that everyone understands the potential of AI as well as the risks associated with the AI tools they are working with.
So, in short, AI security is not just a technical challenge, it is a question of trust, accountability, and resilience. And the organizations that will lead in this space are the ones who can innovate confidently while embedding safety, ethics, and governance into everything they build.
Pallavi
Thank you, Mini. Now, moving on to some industry-specific questions. As you know, AI security threats vary across industries. So, how are industries like healthcare, finance, telecom, retail, public services, and manufacturing uniquely impacted by these threats? And what steps can they take to stay ahead of these risks?
Mini Gupta
The reality is AI security cannot be a one-size-fits-all issue. Every industry is adopting AI in different ways, which also means that each one has its own unique risks and attack surfaces, combined, of course, with its own regulatory pressures, because some sectors are heavily regulated while others are not. Let us break it down with some examples across sectors.
If you look at healthcare, AI is being used for diagnostics, patient monitoring, and even treatment recommendations in some cases. Now, that is incredibly powerful, but it is also high risk. A manipulated input or a poisoned training data set could lead to misdiagnosis or incorrect treatment suggestions. Add to that the sensitivity of health data, and the threat of model inversion or data leakage becomes a serious concern. Together, these kinds of threats raise the overall risk considerably.
Similarly, if you look at the financial services sector, AI drives fraud detection, credit decisions in some cases, and even trading algorithms. Now, these systems are prime targets for manipulation, whether it is extracting model logic to game the system or injecting bias that affects loan approvals. So, when financial decisions go wrong, the ripple effects are of course immediate and expensive.
Again, these add to the overall risks associated with the sector. Now, take another example: telecom. Telcos use AI to optimize networks, detect outages, or enhance customer experience. But from a risk point of view, attackers could potentially disrupt services at scale by targeting the AI models used for traffic routing or load balancing, and the impact extends well beyond the operator because it is ultimately consumers who are affected.
So, this could result in real-world downtime as well as communication blackouts. All of these sectors touch individuals or consumers at large, so any risk associated with the use of AI has a much larger impact, given the volume of consumers they are dealing with.
Similarly, when you look at retail, AI-powered recommendations, inventory management, and dynamic pricing also open the door to algorithm manipulation, such as users gaming pricing engines or influencing product visibility, which can impact revenue, favor one player over another, and eventually erode brand trust.
On the other hand, if you look at public services and government agencies, they are trying to use AI for everything from benefits eligibility to even predictive policing. These are deeply tied to public trust and fairness. So, if an AI model is biased or compromised, it could lead to people being unjustly denied access to essential services or, in the worst case, being targeted unfairly. The impact in that case is far-reaching.
And finally, if you look at manufacturing, AI plays a big role in elements such as robotics or even predictive maintenance and supply chain forecasting. Now, a compromised AI system here could potentially disrupt production lines, damage equipment, or, in fact, even create a risk to physical health and safety.
So, each of the sectors that you have talked about, Pallavi, could see a far-reaching impact if any of these AI-related risks materializes.
So, what can organizations really do to stay ahead?
Like I said, it starts with the full lifecycle approach. They need to ensure that the entire data pipeline is secure, which means validating sources, detecting poisoning, and using privacy-preserving techniques like differential privacy becomes critical. Again, elements like adversarial testing and red teaming right from the start are essential; they should not be an afterthought.
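As a toy illustration of the differential-privacy idea mentioned above, the classic Laplace mechanism adds noise calibrated to how much any single record can shift an aggregate; the bounds and epsilon below are assumptions for the sketch only.

```python
# Release a mean with Laplace noise so that no single record dominates the output.
import numpy as np

def private_mean(values: np.ndarray, lower: float, upper: float, epsilon: float = 1.0) -> float:
    clipped = np.clip(values, lower, upper)          # bound each record's influence
    sensitivity = (upper - lower) / len(clipped)     # max change one record can cause
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# e.g. reporting an average transaction amount derived from training data:
# private_mean(amounts, lower=0.0, upper=10_000.0, epsilon=0.5)
```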
And investing in explainability and transparency, especially in high-impact sectors like the ones we talked about, whether it is financial services, healthcare, or public services, is again extremely important. So is embedding AI governance structures, including ethics reviews, cross-functional oversight, AI security, and a clear escalation path for risks.
Pallavi
Thank you for sharing such detailed inputs, especially the sector-specific examples. Now, looking ahead, how do you see AI security evolving over the next 5 to 10 years? And what emerging technologies or global regulations could shape strategies to mitigate these vulnerabilities?
Mini Gupta
If you look at, say, the next three to five, maybe seven, years, I think we are going to see AI security shift from being a niche concern to a core pillar of cybersecurity strategy for most organizations, right alongside identity, network, and cloud security. So, it is not something that is an afterthought.
Increasingly, I see a shift in mindset towards asking how we can look at AI risk proactively, how AI security should evolve, and how it becomes part of the overall lifecycle. So, first things first: AI is going to get more embedded, not just in individuals using various tools and utilities but also in critical infrastructure, powering decisions in various sectors like we talked about, whether it is healthcare, energy grids, power utilities, financial services, or defense, you name it. With that kind of usability and applicability across sectors, the consequences of AI being attacked or manipulated carry much higher stakes.
We are likely to see a rise in AI-targeted cyberattacks, everything from model poisoning to prompt injection, model theft, and zero-click vulnerabilities. At the same time, we are seeing attacks being powered by AI; it is not just AI models that are being attacked, but AI being used to build some of those very sophisticated cyberattacks as well.
At the same time, AI systems are getting more autonomous. As we move from narrow AI tools to AI agents that make decisions and take actions in real time, the attack surface also expands. So, we need new ways to validate, monitor, and constrain autonomous behavior, and not just detect problems after the fact. So, how do I see this evolve? Basically, I see a few major shifts.
One is built-in AI security. Right now, AI security is often bolted on after deployment. In the future, I would expect security to be baked in from the model development stage, with tools that can simulate attacks, test model robustness, and verify data integrity. Think of it like the DevSecOps we have today, but for AI/ML.
The second shift, in fact, is something that is already happening: AI to defend AI. Ironically, some of the best defenses may come from AI itself. We will see more use of AI-driven threat detection to catch subtle attacks against other AI systems, spotting adversarial inputs, model drift, or even unusual query behavior in real time.
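To picture what flagging unusual query behavior could involve, here is a toy monitor that flags callers whose traffic looks like model-extraction probing; the thresholds and fingerprinting scheme are illustrative assumptions, not a recommended configuration.

```python
# Toy monitor: flag callers who send very high volumes of near-duplicate queries,
# a pattern sometimes associated with model-extraction attempts.
from collections import defaultdict

class QueryMonitor:
    def __init__(self, max_queries: int = 10_000, max_duplicate_ratio: float = 0.5):
        self.total = defaultdict(int)
        self.unique = defaultdict(set)
        self.max_queries = max_queries
        self.max_duplicate_ratio = max_duplicate_ratio

    def record(self, caller_id: str, query_fingerprint: str) -> bool:
        """Return True if this caller's behavior warrants a closer look."""
        self.total[caller_id] += 1
        self.unique[caller_id].add(query_fingerprint)
        duplicate_ratio = 1 - len(self.unique[caller_id]) / self.total[caller_id]
        return (self.total[caller_id] > self.max_queries
                or duplicate_ratio > self.max_duplicate_ratio)
```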
The third shift could be formal verification for AI. Just like we use formal methods to prove properties of safety-critical code, we will start to see efforts to mathematically verify that AI systems behave within safe bounds, especially in areas like autonomous vehicles or healthcare robotics. So, that could be another shift.
One more that we can talk about could be AI supply chain security. As models become more complex and collaborative, we will see a focus on the AI supply chain, ensuring that training data, pre-trained models, and even third-party APIs have not been tampered with, and that there is security around that. Think of it like a Software Bill of Materials (SBOM), but for AI/ML.
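One minimal sketch of the "SBOM for AI/ML" idea is a manifest of artifact hashes (training data, pre-trained weights, dependency archives) checked before deployment; the manifest format and file names are hypothetical.

```python
# Verify that training data, pre-trained weights and other artifacts listed in a
# manifest still match their recorded SHA-256 hashes before deployment.
import hashlib
import json
import pathlib

def sha256_of(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def verify_manifest(manifest_path: str) -> list[str]:
    """Return the artifacts whose current hash no longer matches the manifest."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    return [path for path, expected in manifest.items() if sha256_of(path) != expected]

# tampered = verify_manifest("ai_sbom.json")   # hypothetical manifest file
# A non-empty list means an artifact changed since it was approved.
```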
So, these are some of the shifts we are expecting. One of the major drivers could also be regulation. Globally, we are already seeing groundwork being laid with the EU AI Act, which is pushing hard on risk classification, transparency, and post-market monitoring. In the US, we have the NIST AI Risk Management Framework. And countries like ours, India, as well as Singapore, are developing sector-specific AI guidelines.
So, alongside the security-related shifts, there are regulatory shifts happening that will have a big impact. AI security will become a lot more proactive, continuous, built in, and regulated. And the companies that thrive will be the ones that do not just wait for problems but build secure, explainable, and resilient AI right from the start.
Pallavi
Thank you, Mini. Now, moving on to a question specifically for leadership teams and decision makers. According to you, what are the key priorities that CXOs must focus on when designing an AI security framework? And how can organizations cultivate a culture centered on AI safety and risk awareness?
Mini Gupta
Getting AI security right is obviously not just about tools and tech; like you said, it has to be a culture, and there has to be a top-down push. If I were to look at it from a senior leadership point of view, they need to start by asking the why and aligning with business risk.
AI is not just a fad or something you bring in without really understanding the benefit. So, while it is driving real business decisions, CXOs also need to tie AI security directly to business risk. That could be reputational damage from a biased model, financial loss from a manipulated trading algorithm, or even regulatory penalties from, say, non-compliance or a privacy breach.
So, the framework should focus on securing the use of AI in business-critical processes, and not just the models themselves.
First things first, link it to the business risk. Then we will see a lot more value. Next, take a full lifecycle view. Too often, organizations focus only on model deployment, but vulnerabilities can creep in at every stage, like we have talked about.
Look at it right from the beginning to the end, the whole lifecycle needs to be looked at. Whether it is data sourcing, model development, deployment, ongoing evaluation, whatever it is, look at the entire lifecycle.
Third, build governance into the process and not around it. Make it more inline, which really means clear roles, clearly defined responsibilities, a clear escalation path, and review points embedded into the AI pipeline.
So, all of these things, whether it is an AI governance board, or AI risk register or automated checkpoints, CXOs need to ensure that security is not a gatekeeper but really a partner in line with what we are doing with our business processes.
Then, what CXOs could look at is emphasizing explainability and traceability. They could prioritize transparency by design, not just to meet regulatory expectations but to genuinely build trust and, in fact, innovate and differentiate themselves from others. This includes model documentation, decision logs, and clear ownership. When something goes wrong, the ability to trace what happened is critical.
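As a hedged illustration of decision logs and traceability, a record like the one below, capturing model version, input fingerprint, output, and a pointer to an explanation artifact, lets a decision be reconstructed later; the field names are assumptions, not a standard schema.

```python
# Build one traceable decision-log entry. In practice this would be appended to
# tamper-evident, access-controlled storage.
import datetime
import hashlib
import json

def decision_log_entry(model_version: str, features: dict, prediction, explanation_ref: str) -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),                      # fingerprint of the input, not the raw data
        "prediction": prediction,
        "explanation_ref": explanation_ref, # e.g. a stored feature-attribution artifact
    }
    return json.dumps(record)

# entry = decision_log_entry("credit-model-v3", {"income": 52000, "tenure": 4},
#                            "approve", "expl/2024-07-01/abc123")
```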
The next thing CXOs should focus on is third-party risk. That is something they should not forget, because many organizations now rely on external models, external data sets, or integration points. So, the AI supply chain, just like the software supply chain, needs to be vetted, monitored, and governed, and procurement processes should now include AI risk checks, not just financial or technical diligence.
Now, on to the culture.
How do we build the culture around AI safety and risk awareness? Like you rightly said, it starts with the mindset; the tone from the top definitely matters. If leadership treats AI safety as optional or just an engineering problem, that is what the organization will adopt. But if it is seen as a shared responsibility across the board, then it obviously becomes part of the company's DNA.
So, some ways of reinforcing this could include elements such as cross-functional education: run regular sessions to help teams understand not just how AI works, but also how it can fail and what that means for the business. Second, they could celebrate responsible AI wins: when teams catch an issue early or make a trade-off in favor of safety, recognize it; do not discourage AI risk identification but, in fact, make it a success story. Make responsibility part of the innovation story. Third could be to create space for questioning: encourage teams to raise concerns about how a model is trained or used, without the fear of slowing things down or being labeled as anti-AI. This is really not anti-AI or anti-innovation; it is a culture that can help identify risks that may otherwise go unnoticed.
And of course, build security in at all points. Make red teaming and scenario planning routine; treat it like fire drills. Ask what happens if someone tries to game this model, or build scenarios and simulations where users can learn from what they are facing. What if they were facing a prompt injection attack? What if the responses they received were biased? What if this model makes a wrong call? Get people thinking beyond the routine, and teach them through simulation.
For CXOs, the priorities are clear: align AI security with business risk, build security into the lifecycle, and lead a culture that values trust over speed. Because in the age of AI, how you build is just as important as what you build.
Pallavi
Thank you for joining us and being part of this insightful conversation.
Mini Gupta
Thank you so much.
Pallavi
And to all our listeners, thank you for tuning in for today's EY India Insights podcast episode. To hear more from our experts, don't forget to subscribe and stay connected with EY. Until next time, stay informed and stay secure. This is Pallavi signing off.