EY India Updates: Securing AI: Addressing LLM vulnerabilities

How companies can secure language models against emerging AI cyber risks

Related topics

As AI adoption grows, so do cyber threats. Organizations must secure large language models (LLMs) against data leaks and prompt injections. 


In brief

  • The global AI market is projected to reach US$1,811.75 billion by 2030.
  • By 2025, an estimated 0.75 billion applications will be powered by LLM technology.
  • Organizations must implement proactive security measures to mitigate LLM-related information security risks.
  • To harness AI’s potential responsibly, organizations must protect against threats and build a secure, ethical AI ecosystem.

With the rapid evolution of technology, Large Language Models (LLMs) have become integral to our lives, transforming industries and daily interactions. The global LLM market is projected to grow from $1.59 billion in 2023 to $259.817 billion in 2030.

From ChatGPT-inspired creativity to sophisticated business chatbots, AI solutions are reshaping work, communication and problem-solving. As dependency on AI grows, so does automation. With AI's expanding role, AI-driven cybersecurity and compliance standards become crucial to ensuring responsible adoption. This silent transformation underscores the essential nature of AI and LLMs in business, emphasizing the need for secure and ethical implementation.

Organizations across industries are integrating AI into their core business operations to streamline workflows, automate tasks and improve decision-making. By 2026, over 80% of enterprises will have integrated AI automation and GenAI-enabled applications into their core functions. According to the AI in Action 2024 report, 67% of surveyed leaders reported a 25% revenue increase due to AI integration. However, with AI's expanding role, significant security risks must be addressed to fully harness its potential without compromising safety.

The dilemma of trust — why AI’s strength is also its weakness?

By 2025, an estimated 0.75 billion applications will be created using LLMs, automating 50% of the digital processes. This shift marks a new era for LLM security, emphasizing the need to safeguard systems, applications and business against malicious actors.

With the growth of AI, there is a higher risk of model vulnerabilities being exploited by malicious actors. As AI adoption increases, so do AI-powered cyber threats. The wider the use of AI, the larger the attack surface, making security a must-have. The researchers at an AI cybersecurity firm noted a 135% rise in LLM-powered phishing attacks in 2023, showcasing various cases of LLM misuse.

Recently, security researchers discovered that GenAI chatbots could be manipulated through indirect prompt injection attacks, potentially allowing third-party attackers to distribute malicious documents and emails to target accounts and compromising the integrity of the responses.

Data leakage in LLMs is a significant concern, due to various factors. From simple prompt injections to data poisoning, LLMs can be exploited in multiple ways. They may leak exact snippets of their training data (training data regurgitation), reveal information through clever prompts (prompt hijacking), or be manipulated through carefully designed attacks such as model-based parameter manipulation. The challenge is not just building smart AI but securing it against ever-evolving threats.

Nations have started developing AI regulations – most notably the European Union’s AI Act (2024), which applies to both EU member states and non-EU entities offering AI systems within the union. Despite these efforts, the AI regulatory landscape largely remains insufficient, creating challenges in intellectual property, accountability and ethical AI considerations. Critical issues such as the ownership of AI-generated content, liability for misinformation and potential societal impacts remain unaddressed. 

In India, the AI regulatory landscape lacks specific codified laws, resulting in LLM compliance challenges. However, two pivotal frameworks are guiding technological development: The National Strategy for Artificial Intelligence and the Principles for Responsible AI. These frameworks represent initial steps toward developing a structured approach to AI governance in enterprises.

While regulatory frameworks lay the groundwork for responsible AI development, ensuring AI security remains a challenge. Beyond policies and technical safeguards, the true test of AI security lies in the human oversight.

For all its complexity, encryption and obfuscation, the weakest link in AI security is not AI itself but the humans behind it. AI is perceived as intelligent, leading to overreliance and trust. However, AI does not "understand" security—it merely follows rules, which can be manipulated.    

Securing AI in an unregulated landscape

Organizations must prioritize proactive security measures to mitigate LLM security risks. This requires a multi-layered approach focused on security and ethics. Key measures include:

Technical safeguards:

  • Input sanitization is essential for avoiding the influence of harmful or unsuitable content on the LLM's outputs.
  • Securing data through encryption, both when it is stored and during transmission, will block unauthorized entities from accessing it.
  • Continuous monitoring, logging, and testing against potential threats will promptly identify and address any misuse or security gaps.

Awareness and education:

  • To recognize, be aware and stay updated with the latest LLM security tools which security firms are making use of to reduce the potential dangers associated with AI. AI-driven companies are implementing advanced filtering mechanisms to monitor both inputs and outputs, ensuring that models do not generate harmful or unethical content.
  • AI literacy is critical to the responsible adoption of security practices. Organizations should invest in educating users, employees, and stakeholders about AI’s benefits, risks and responsible use.

Collaborative efforts: 

  • In the absence of globally consistent GenAI regulations, industry collaboration and self-regulation become critical to share best practices, develop standardized security protocols, and create ethical guidelines for LLM deployment.
  • A collective approach ensures that businesses align their security efforts and work together to mitigate AI-powered cyber threats, ultimately fostering responsible AI usage across sectors.

Conclusion

As the vulnerabilities of LLM systems become more evident and security breaches grow more dangerous, organizations must establish a robust and secure AI framework like ISO 42001. Responsible AI adoption is not just about innovation—it is about ensuring security, compliance and ethical deployment in an increasingly AI-driven world.


Summary

LLMs are reshaping the digital landscape, driving innovation across industries. Yet, rapid progress raises security and ethical concerns. With limited regulation, organizations must adopt proactive security measures, foster AI awareness, and collaborate on industry standards to harness AI responsibly and mitigate emerging threats.

Related articles

How GenAI can drive innovation in contract management and deliver value

Discover how Generative AI can revolutionize contract management, boosting efficiency, reducing risks, and unlocking new value.

17 Feb 2025 Gaurav Sharma

How AI is activating step changes in Indian education

Discover how AI is transforming Indian education with personalized learning, adaptive tools, and innovation, driving India's vision for Viksit Bharat 2047. Learn more.

31 Jan 2025 Dr. Avantika Tomar

How much productivity can GenAI unlock in India? The AIdea of India 2025

Explore AIdea of India 2025 to find out how Generative AI is revolutionizing industries and productivity in India's digital transformation journey. Stay future-ready with EY India.

14 Jan 2025 Mahesh Makhija + 3

    About this article

    You are visiting EY in (en)
    in en