The dilemma of trust — why AI’s strength is also its weakness?
By 2025, an estimated 0.75 billion applications will be created using LLMs, automating 50% of the digital processes. This shift marks a new era for LLM security, emphasizing the need to safeguard systems, applications and business against malicious actors.
With the growth of AI, there is a higher risk of model vulnerabilities being exploited by malicious actors. As AI adoption increases, so do AI-powered cyber threats. The wider the use of AI, the larger the attack surface, making security a must-have. The researchers at an AI cybersecurity firm noted a 135% rise in LLM-powered phishing attacks in 2023, showcasing various cases of LLM misuse.
Recently, security researchers discovered that GenAI chatbots could be manipulated through indirect prompt injection attacks, potentially allowing third-party attackers to distribute malicious documents and emails to target accounts and compromising the integrity of the responses.
Data leakage in LLMs is a significant concern, due to various factors. From simple prompt injections to data poisoning, LLMs can be exploited in multiple ways. They may leak exact snippets of their training data (training data regurgitation), reveal information through clever prompts (prompt hijacking), or be manipulated through carefully designed attacks such as model-based parameter manipulation. The challenge is not just building smart AI but securing it against ever-evolving threats.
Nations have started developing AI regulations – most notably the European Union’s AI Act (2024), which applies to both EU member states and non-EU entities offering AI systems within the union. Despite these efforts, the AI regulatory landscape largely remains insufficient, creating challenges in intellectual property, accountability and ethical AI considerations. Critical issues such as the ownership of AI-generated content, liability for misinformation and potential societal impacts remain unaddressed.
In India, the AI regulatory landscape lacks specific codified laws, resulting in LLM compliance challenges. However, two pivotal frameworks are guiding technological development: The National Strategy for Artificial Intelligence and the Principles for Responsible AI. These frameworks represent initial steps toward developing a structured approach to AI governance in enterprises.
While regulatory frameworks lay the groundwork for responsible AI development, ensuring AI security remains a challenge. Beyond policies and technical safeguards, the true test of AI security lies in the human oversight.
For all its complexity, encryption and obfuscation, the weakest link in AI security is not AI itself but the humans behind it. AI is perceived as intelligent, leading to overreliance and trust. However, AI does not "understand" security—it merely follows rules, which can be manipulated.