7 minute read 26 Mar 2024
Tech Trend: Responsible AI
EY Tech Trends series

Tech Trend 05: Responsible AI: Building a sustainable framework

By Kartik Shinde

EY India Cybersecurity Consulting Partner

Kartik has over 20 years of experience and is a leading voice for cyber in the financial services segment.

7 minute read 26 Mar 2024

Show resources

Without adequate controls, adopting AI poses regulatory, reputational, and business risks to organizations.

In brief

  • Responsible AI aims to identify and mitigate bias, ensuring that AI systems make fair and unbiased decisions.
  • Organizations must revamp AI policies, establish trusted frameworks, and undergo trust assessment to adopt AI responsibly.
  • Global AI regulations vary in scope and approach, reflecting the growing recognition of the need to govern AI technologies responsibly.
  • Collaboration among stakeholders is crucial for navigating this complex landscape and realizing the potential of GenAI responsibly.

Recently, a Hong Kong multinational company lost over $25.6 million because of a deepfake video made using AI. The video avatar looked so credible that employees were convinced that they were talking to their CFO during a conference video call and proceeded to execute a series of transactions. Not only did the imposter appear authentic, but it also sounded convincing. In India, too, there are several reported cases of deepfake videos and AI-generated voices, including a recent case of a woman falling victim to an AI-generated voice fraud and losing money. In early 2024, two separate deepfake videos of star Indian cricketers went viral on social media. In the videos, their voices have been manipulated to promote an online game and a betting app. The intense competition surrounding AI development, with countries and companies vying for supremacy, has raised crucial discussions about responsible AI. The ascent of large language models (LLMs) is giving rise to urgent questions on the boundaries of fair use.

The concept of responsible AI is not new. Back in 2016, Big Tech companies banded together to establish a partnership on AI, laying the groundwork for ethical AI practices. However, as the GenAI landscape evolves, fresh and complex challenges are emerging.  

GenAI risks

Risks associated with GenAI, especially in LLMs, include model-induced hallucinations, ownership, and technological vulnerabilities such as data breaches, as well as compliance challenges arising from biased and toxic responses. There have been recent examples of authors being credited with non-existent articles and fake legal cases have been cited by GenAI tools. Inadequate control over LLMs trained on confidential data can lead to data breaches, which, according to a recent EY survey, is the single biggest hurdle to GenAI adoption in India.

Toxic information and data poisoning, intensified by insufficient data quality controls and inadequate cyber and privacy safeguards, adds another layer of complexity, diminishing the reliability of GenAI outputs and jeopardizing informed decision-making. Additionally, the broader spectrum of technology risks of deepfakes to facilitate crime, fabricate evidence and erode trust necessitates proactive measures for secure GenAI adoption.

 Potential intellectual property rights (IPR) violations during content and product creation also raise legal and ethical questions about the origin and ownership of generated work.

Other risks include:

  • Bias and discrimination
  • Misuse of personal data
  • Explainability
  • Misuse of personal data
  • Predictability
  • Employee experimentation  
  • Unreliable outputs
  • Limitations of knowledge
  • Evolving regulation
  • Legal risks

Building guardrails against risks  

To capitalize on the competitive advantage and drive business, GenAI models and solutions are implementing safety guardrails to build more trust. The tech giants have created the frontier model forum. Its objectives include advancing AI safety research, identifying best practices, and collaborating with policymakers, academics, civil society, and companies. The forum aims to ensure that AI developments are handled responsibly and deployed responsibly. A model’s performance is evaluated and measured against designated test sets and quality considerations. Model monitoring and performance insights are leveraged to maintain high quality standards. 

With various models evolving, implementing robust data governance policies that comply with privacy regulations will help companies mitigate risks. There are seven key domains to establish a robust framework and governance processes that align with industry-leading standards of responsible AI. These are business resiliency, security operations, model design development, governance, identity and access management, data management, and model security.

This podcast series aims to explore the fascinating world of Generative AI Unplugged, its applications, and its impact on various industries. Join us as we dive deep into the realm of artificial intelligence and uncover the innovative techniques used to generate unique and creative outputs.

Know more

GenAI  risk governance  and framework

Such a framework assesses an organization’s existing policies, procedures, and security standard documents to determine the adequacy of governance processes and controls associated with GenAI and evaluates implementation effectiveness.

Regulations so far

The growing need for AI regulations has resulted in a complex and diverse array of global regulations to navigate AI risks. China has been a forerunner in designing a new law that focuses on algorithm recommendations, including generative and synthetic algorithms. EU’s AI Act is the first major legislation to stress on a risk-based approach. It categorizes AI applications into different risk levels, ranging from unacceptable to low and with high-risk applications subject to more stringent requirements. The law prohibits AI systems that pose an ‘unacceptable risk,’ such as those utilizing biometric data to deduce sensitive traits like individuals' sexual orientation. Developers of high-risk applications, such as the ones that use AI in recruitment and law enforcement, must show that their models are safe and transparent.

While India is developing its own law, the Ministry of Electronics and IT (Meity) has recently issued an advisory to AI platforms to take permission before launching AI products in the country. The government has asked intermediaries to tag any potentially deceptive content with distinctive metadata or identifiers to trace its source and thus aid in tracking misinformation or deepfakes and its creators.  Meanwhile, in the US, the AI Executive Order directs agencies to move toward adoption with safeguards in place. 

The G20 nations have also committed to promoting responsible AI in achieving the Sustainable Development Goals (SDGs). Additionally, 28 countries, including India, China, the US, and the UK, signed the Bletchley Declaration AI summit, pledging to address AI risks and collaborate on safety research. Also, HITRUST has released the latest version of the Common Security Framework comprising areas specifically addressing AI risk management. Along with the global agreements, Responsible AI needs local regulations as well. 

EY India Tech Trends 2024 Series

EY Tech Trends series is a collection of tech resources, wherein each chapter focuses on the rising shifts in key technology areas and their impact across sectors.

Know more

AI regulations

While governments are working on regulations, Big Tech and industrial bodies are implementing their own set of safeguards, including continuous monitoring and auditing, investing in cyber security measures, red-teaming GenAI models, using frontier AI models, reporting inappropriate uses and bias, watermarking on audio and visual content, and so on. 

Responsible AI adoption: key steps

While countries are framing global agreements and regulations and models are implementing guardrails, organizations must consider several key points in adopting AI safely and responsibly. 

  • Redesign AI policies and design standards.  
  • Build a trusted AI framework for your organizational needs: Decide the type of AI appropriate for your organization, ensuring ethics, social responsibility, accountability and reliability. Creating trust in AI will require both technical and cultural solutions. This framework should emphasize bias, resiliency, explainability, transparency, and performance.  
  • Form GenAI ethics board: Ensure a diverse mix of legal experts, technology leaders, security innovators, and human rights scholars.
  • Perform HITRUST Assessment: Conduct HITRUST certification assessment to demonstrate assurance of the security and operational controls within the AI system. 
  • Train employees: Deliver AI risk management training and ensure technical skill development for employees.
  • Put in place a new data privacy and security architecture.
  • Implement technology and data quality controls: Evaluate controls implemented for AI risk management and review current state to ascertain applicability of the National Institute of Science and Technology AI Risk Management Framework security and privacy requirement. Deploy tools to monitor cyber and data poisoning attacks, data privacy, monitor for hallucinations, manage third-party risks, prompt injections and malicious attacks.

Navigating the complex landscape of responsible AI requires a multifaceted approach. While technological advancements offer immense potential, mitigating associated risks necessitates proactive collaboration among governments, organizations, and global communities. Establishing trusted AI systems, fostering responsible AI development practices, and prioritizing human-centered design are essential steps toward harnessing the power of GenAI for a sustainable and equitable future. The journey toward responsible AI will require continuous learning, adaptation, and a commitment to ethical and inclusive practices.

The AIdea of India: Generative AI's potential to accelerate India's digital transformation.

Know more

Summary

Building trust in AI systems is essential for their acceptance and adoption. The risks associated with GenAI, particularly in Large Language Models (LLMs), include model-induced hallucinations, ownership disputes, and technological vulnerabilities such as data breaches, along with compliance challenges due to biased or toxic responses. Intellectual property rights violations, bias, discrimination, and legal risks are additional concerns. To address these risks, safety guardrails are being implemented, and regulations are evolving globally. Responsible AI adoption involves redesigning policies, establishing trusted frameworks, forming ethics boards, training employees, and implementing robust security measures.

About this article

By Kartik Shinde

EY India Cybersecurity Consulting Partner

Kartik has over 20 years of experience and is a leading voice for cyber in the financial services segment.