
Navigating cyber risks in AI: safeguarding financial services


A deep dive into the critical vulnerabilities that Swiss financial services organizations deploying AI in cloud environments may be exposed to.


In brief

  • AI in financial services increases cyber risks like data breaches, requiring strong governance to protect sensitive information.
  • Model learning risks from AI can lead to biased outputs, mitigated by strict oversight in training processes.
  • Availability issues with immature AI solutions demand robust resilience planning to ensure operational continuity.

With financial services organizations increasingly adopting artificial intelligence (AI) technologies, the potential for innovation and efficiency is immense. However, this transformation brings significant cyber risks that must be carefully managed. Insights from a recent presentation by EY highlight critical vulnerabilities associated with AI deployment in cloud environments, particularly in the context of financial services in Switzerland.

Understanding the cyber risks involved in using AI within your organization

The integration of AI in financial organizations is revolutionizing operations; however, it also introduces new cyber-related threats that demand advanced cybersecurity measures. Therefore, assessing the cybersecurity risks inherent in AI adoption is essential for all financial institutions looking to innovate responsibly.

To illustrate the identified risks, consider CyBank, a fictitious mid-sized bank that has implemented an AI-enabled relationship manager for corporate banking. The most prominent cyber risks related to the use of this AI-enabled solution are described below:

Risk of data compromise

The vast amounts of sensitive financial information processed by AI systems create a large attack surface, making them attractive targets for cybercriminals seeking unauthorized access. This risk is exacerbated by the decentralized nature of data storage in AI models, which complicates the management and protection of sensitive information. Unauthorized access to this data can have severe consequences, including AI data breaches and identity theft, particularly if robust security measures are not implemented to safeguard against such vulnerabilities.

Model learning risks

As Large Language Models (LLMs) process vast amounts of client interactions and financial data, there is potential for them to inadvertently learn and replicate biased or incorrect information present in the training data. This could lead to flawed business decisions and ineffective recommendations. As a consequence, such outputs from an AI solution could put the integrity and reputation of a business at risk.

Availability issues

Although AI-enabled solutions are becoming more commercially available, the overall market has yet to reach the maturity of other well-established IT systems. While adoption is growing, many organizations still face difficulties in building the capabilities needed to keep such a solution readily available to support their business. Therefore, relying on emerging AI solutions to support or replace essential business processes may introduce a higher risk of downtime. Organizations must carefully assess the reliability and stability of these technologies to mitigate potential disruptions.

What are the best practices for addressing these major cyber risks?

Addressing cyber risks emerging from the adoption of AI-enabled solutions requires careful consideration and a structured approach. More specifically, organizations must establish robust data governance and ethics to ensure data integrity and security, maintain strict oversight of LLM training to prevent model biases and vulnerabilities, and put strong AI resilience planning in place to safeguard against disruptions. These three measures are essential for strengthening the cybersecurity posture of your organization when integrating AI solutions within your environment. Below, we drill deeper into each of these high-impact measures:

Robust data governance

Establishing strict data governance policies is essential to protect sensitive information when integrating an AI solution within a financial organization. This involves implementing comprehensive access controls to ensure that only authorized personnel can access critical data, thereby minimizing the risk of cyberattacks and unauthorized exposure. Additionally, employing encryption protocols for data both at rest and in transit is crucial to safeguard against potential breaches, ensuring that confidential client information remains secure from AI cyber threats. Furthermore, regular audits and monitoring of data access and usage can help maintain compliance with regulatory requirements, reinforcing the organization's commitment to data integrity and protection in the age of AI.
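
To make these controls more tangible, the minimal Python sketch below combines a role-based access check with encryption at rest using the open-source cryptography library. It is an illustrative example under assumptions, not a reference implementation: the role names, record contents and in-code key handling are hypothetical, and in practice keys would live in a managed key management service or HSM.

    # Minimal sketch (assumptions labelled): role-based access check plus
    # encryption at rest with the open-source "cryptography" library (Fernet).
    from cryptography.fernet import Fernet

    # Hypothetical roles allowed to read client data; real policies would be centrally managed.
    ALLOWED_ROLES = {"relationship_manager", "compliance_officer"}

    key = Fernet.generate_key()   # illustration only: store keys in a KMS/HSM, never in code
    cipher = Fernet(key)

    def store_client_record(plaintext: str) -> bytes:
        """Encrypt sensitive client data before it is written to storage."""
        return cipher.encrypt(plaintext.encode("utf-8"))

    def read_client_record(token: bytes, user_role: str) -> str:
        """Decrypt only for authorized roles; unauthorized attempts are rejected."""
        if user_role not in ALLOWED_ROLES:
            raise PermissionError(f"Role '{user_role}' is not authorized to read client data")
        return cipher.decrypt(token).decode("utf-8")

    # Hypothetical usage
    record = store_client_record("Client X: credit limit CHF 2m, sector: manufacturing")
    print(read_client_record(record, "relationship_manager"))

In a production setting, the access decision and every decryption would additionally be logged to support the audits and monitoring described above.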

LLM training oversight

Maintaining diligent oversight of the LLM training process is paramount for financial organizations aiming to mitigate model risks in AI and ensure compliance. This oversight should encompass establishing robust controls that mandate cleaning, transformation and verification of the data before it is ingested to train the models. More specifically, the cleaning processes should focus on removing outliers and erroneous data. Once cleaned, the data should be standardized and structured to facilitate the training process. Verification steps are crucial to validate the accuracy and reliability of the transformed data, ensuring that the AI model is trained on high-quality, trustworthy information. By integrating these controls, financial organizations can minimize the risk of biased or inaccurate outputs, maintain regulatory compliance, and protect their reputation.
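
As a simple illustration of such controls, the Python sketch below chains cleaning, transformation and verification steps over a small, made-up set of training records using pandas. The column names, thresholds and sample data are assumptions chosen for the example, not a prescribed pipeline.

    # Illustrative pre-ingestion controls: clean -> transform -> verify.
    # Column names ("client_segment", "amount") and thresholds are hypothetical.
    import pandas as pd

    def clean(df: pd.DataFrame) -> pd.DataFrame:
        """Cleaning: drop duplicates and incomplete rows, remove amount outliers (1.5x IQR)."""
        df = df.drop_duplicates().dropna()
        q1, q3 = df["amount"].quantile([0.25, 0.75])
        iqr = q3 - q1
        return df[df["amount"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        """Transformation: standardize text fields and scale numeric features."""
        df = df.copy()
        df["client_segment"] = df["client_segment"].str.strip().str.lower()
        df["amount_scaled"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()
        return df

    def verify(df: pd.DataFrame) -> pd.DataFrame:
        """Verification: fail fast if the curated set is empty or still contains gaps."""
        assert not df.empty, "no training records left after cleaning"
        assert df.isna().sum().sum() == 0, "missing values remain after transformation"
        return df

    # Made-up raw records containing a duplicate, a gap and an outlier
    raw = pd.DataFrame({
        "client_segment": ["SME ", "corporate", "corporate", None, "SME", "SME", "corporate"],
        "amount": [12_000.0, 15_000.0, 15_000.0, 18_000.0, 20_000.0, 25_000.0, 9_000_000.0],
    })
    curated = verify(transform(clean(raw)))
    print(curated)

Only the verified output of such a pipeline would be released for model training, giving the organization an auditable gate between raw data and the LLM.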

Robust resilience planning

Financial organizations integrating AI solutions should focus their resilience planning on scenarios that test how well their teams are prepared for actual disruptions. This can be achieved through regular security audits and risk assessments that identify vulnerabilities within AI systems, encompassing both the technical aspects of AI deployment and the potential for human error. To enhance preparedness, resilience plans should include clearly defined incident response procedures, designated communication channels, and backup systems that teams can fall back on if the AI solution is disrupted. By proactively preparing for potential disruptions, financial institutions can minimize the impact of AI system failures on operations and client service.
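
As a small sketch of such a fallback, the Python example below wraps a call to a hypothetical AI assistant endpoint with a hard timeout and routes the request to a manual process when the service is unavailable. The endpoint URL, payload and function names are assumptions made for illustration only.

    # Illustrative fail-over: prefer the AI solution, fall back to a manual process.
    # The endpoint and payload below are hypothetical.
    import requests

    AI_ASSISTANT_URL = "https://ai-assistant.cybank.example/recommendations"  # hypothetical endpoint

    def ai_recommendation(client_id: str) -> str:
        """Query the (hypothetical) AI relationship-manager service with a hard timeout."""
        response = requests.post(AI_ASSISTANT_URL, json={"client_id": client_id}, timeout=3)
        response.raise_for_status()
        return response.json()["recommendation"]

    def fallback_recommendation(client_id: str) -> str:
        """Degraded mode: route the request to a human relationship-manager queue."""
        return f"Request for client {client_id} queued for manual review by a relationship manager."

    def get_recommendation(client_id: str) -> str:
        """Use the AI solution when available, but fail over gracefully when it is not."""
        try:
            return ai_recommendation(client_id)
        except (requests.Timeout, requests.ConnectionError, requests.HTTPError):
            # Incident response hook: alert the on-call team here, then fall back.
            return fallback_recommendation(client_id)

    print(get_recommendation("CH-00042"))

The value of the pattern lies less in the code than in the rehearsed procedure around it: teams know in advance which channel to use, whom to alert and how to keep serving clients while the AI solution is restored.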

Summary

As financial institutions increasingly adopt AI solutions, they must also address the associated emerging cyber risks. Key concerns include data compromise as AI systems handling sensitive financial information become attractive targets for cyberattacks; model learning risks given that AI models can produce biased or inaccurate outputs affecting business decisions; and availability issues due to the immaturity of AI solutions. Mitigating these risks requires organizations to focus on robust data governance, strict oversight in AI model training and comprehensive resilience planning to ensure security, reliability and compliance. By implementing these measures, financial institutions can leverage AI effectively while minimizing potential threats.

Related Content

As technology and risks evolve, how will AI tools elevate your cyber team?

Unlock your cyber team's potential with EY's four AI personas. Enhance effectiveness and prevent threats by integrating AI tools today.


EY position paper on Artificial Intelligence (AI): AI-generated content in transition – between progress and fatigue

AI-generated content is shaping the digital space – an overview of the opportunities, challenges and the role of collaboration between humans and AI.


