
Risks and benefits of generative AI in the financial sector


FINMA highlights AI risks and expects financial industry compliance across governance, reliability, transparency and non-discrimination.


In brief

  • Swiss AI regulation: Switzerland regulates AI via technology-neutral laws and, following the Federal Council decision of 12 February 2025, plans targeted measures by the end of 2026 based on the Council of Europe's AI Convention.
  • FINMA perspective: FINMA Guidance 08/2024 (December 2024) highlights key AI risks for financial institutions and expects active risk management under existing regulatory frameworks.
  • Impact on financial services: AI can enhance analytics, decisions and client service in finance, but sustainable use requires strong governance, human accountability and effective risk controls.

Upcoming Regulatory Framework on AI in Switzerland

The Swiss Federal Council has opted for a pragmatic, risk-based approach to AI regulation that seeks to leverage AI’s potential to strengthen Switzerland as a center for business and innovation while keeping societal risks as low as possible. Central to this approach is the incorporation of the Council of Europe’s AI Convention into Swiss law, which will apply primarily to state actors. Any necessary legislative changes are intended to be as sector-specific as possible, with cross-sector rules limited to key areas affecting fundamental rights, such as data protection. In addition to binding legislation, the Federal Council plans to introduce non-binding measures, including self-disclosure commitments and industry solutions. The regulatory approach is guided by three objectives: reinforcing Switzerland’s innovation capacity, safeguarding fundamental rights (including economic freedom) and strengthening public trust in AI.

To implement this framework, a consultation draft is to be prepared by the end of 2026, setting out legal measures in areas such as transparency, data protection, non-discrimination and supervision, alongside a plan for complementary non-binding measures. International compatibility, particularly with Switzerland’s main trading partners, and stakeholder involvement will be taken into account. The combination of binding and non-binding instruments is intended to ensure a robust yet flexible regulatory framework that can keep pace with rapid technological developments.

FINMA Guidance on AI in the Financial Sector

In Guidance 08/2024, published on 18 December 2024, FINMA addresses the growing use of artificial intelligence in the Swiss financial sector and highlights the need for robust governance and risk management within the existing, technology-neutral regulatory framework. FINMA does not consider AI to be inherently high-risk, but emphasizes that risks depend on the materiality, complexity, autonomy and use case of each individual AI application.

Based on supervisory findings, FINMA identifies key AI-related risks, notably model risks (including lack of robustness, correctness, bias and explainability), data quality risks, IT and cyber risks, legal and reputational risks, and increasing dependencies on third-party providers such as cloud and model vendors. FINMA expects institutions to maintain comprehensive AI inventories, apply consistent risk classification, and ensure clear accountability and responsibilities across the AI lifecycle, all of which are essential components of effective AI risk management.
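As a purely illustrative sketch (not a FINMA-prescribed format), an AI inventory entry and a consistent, rule-based risk classification could look like the following; every field name, scale and threshold here is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIApplication:
    """One entry in an institution-wide AI inventory (illustrative fields only)."""
    name: str
    owner: str            # accountable function or person
    use_case: str
    materiality: int      # 1 (minor) .. 3 (business-critical)
    complexity: int      # 1 (simple model) .. 3 (complex / GenAI)
    autonomy: int         # 1 (human-in-the-loop) .. 3 (fully automated)


def classify(app: AIApplication) -> RiskTier:
    """Apply the same rule to every inventory entry for consistent classification."""
    score = app.materiality + app.complexity + app.autonomy
    if score >= 8:
        return RiskTier.HIGH
    if score >= 5:
        return RiskTier.MEDIUM
    return RiskTier.LOW


chatbot = AIApplication(
    name="client-support-chatbot", owner="COO Office",
    use_case="client service", materiality=2, complexity=3, autonomy=2,
)
print(classify(chatbot).value)  # medium
```

The point of the sketch is the process, not the scoring rule: a single structured record per application plus one shared classification function is what makes accountability and risk tiers comparable across the whole inventory.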

The guidance further stresses the importance of high-quality and well-governed data, regular testing and ongoing monitoring of AI systems (including accuracy, stability, bias and data drift), adequate documentation, and sufficient explainability, particularly where AI outputs affect clients, employees or regulatory compliance. For material AI applications, FINMA also expects independent reviews separate from model development. Overall, FINMA signals that it will continue to refine its supervisory expectations in line with international developments, while maintaining a proportionate, principles-based and technology-neutral approach.
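To make the monitoring expectation concrete, the sketch below shows one deliberately simple (hypothetical) data-drift check: it flags when a feature's live mean shifts by more than a chosen number of training-time standard deviations. Production monitoring uses richer statistics, but the principle is the same:

```python
import statistics


def drift_score(reference: list[float], live: list[float]) -> float:
    """Standardized mean shift of live data versus the reference (training) data."""
    ref_mean = statistics.fmean(reference)
    ref_sd = statistics.stdev(reference)
    return abs(statistics.fmean(live) - ref_mean) / ref_sd


# Illustrative numbers: the same input feature at training time vs. in production.
reference = [100.0, 102.0, 98.0, 101.0, 99.0]
live = [110.0, 112.0, 108.0, 111.0, 109.0]

score = drift_score(reference, live)
ALERT_THRESHOLD = 1.0  # hypothetical; set per application risk tier
print(f"drift score: {score:.1f}, alert: {score > ALERT_THRESHOLD}")
```

Running such a check on a schedule, with thresholds tied to the application's risk classification, is one way to turn "ongoing monitoring" from a policy statement into an operational control.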

EU AI Act readiness

On 19 November 2025, the European Commission adopted its legislative proposal for a Digital Omnibus on AI, as part of the broader Digital Package on Simplification. The proposal introduces targeted amendments to the EU Artificial Intelligence Act (AI Act), alongside changes to the General Data Protection Regulation and related legal acts, with the objective of modernizing and simplifying the regulatory framework, reducing administrative burdens for companies, lowering compliance costs and fostering innovation, in particular for SMEs and small mid‑caps.

The Digital Omnibus proposal seeks, among other things, to adjust the application timelines for certain high‑risk AI obligations, taking into account delays in the availability of harmonized standards and compliance support tools. In this context, the Commission proposed linking the application of selected high‑risk AI requirements to the availability of appropriate standards and guidance, while providing backstop dates to ensure legal certainty. Further proposed amendments aim to reinforce the powers of the EU AI Office and centralize oversight of general‑purpose AI systems, extend existing regulatory simplifications for SMEs and small mid‑caps (including simplified technical documentation), promote AI literacy, and broaden measures supporting compliance, including regulatory sandboxes and possibilities for real‑world testing.

At the same time, it is important to note that the EU AI Act itself has already been adopted and is in force, with its obligations applying progressively over time. The Digital Omnibus on AI constitutes a proposal to amend selected provisions of the AI Act and remains subject to the EU legislative process. Nevertheless, the AI Act is expected to have far‑reaching extraterritorial effects, in particular for non‑EU providers placing AI systems on the EU market or whose AI outputs are used within the EU.

As compliance with the AI Act is often more complex and costly once AI systems are already operational than during the development phase, firms are strongly advised to start preparing at an early stage. This includes conducting a structured AI Act readiness assessment, mapping affected AI use cases, and initiating proportionate adaptation measures to align governance, risk management and documentation with the evolving EU regulatory framework.
 

Embrace AI and elevate your business while managing the risks

In the financial sector in particular, AI has moved from experimentation to a strategic capability, fundamentally reshaping data analysis, decision‑making and customer engagement. By deploying AI and generative AI (GenAI), financial institutions can unlock significant value while navigating the risks and benefits of the technology. GenAI enables deeper insights from large and complex datasets, improves forecasting and strengthens risk assessment; its advanced capabilities also enhance fraud detection, financial crime prevention and operational resilience. Combined, these developments provide a strong competitive edge in an increasingly fast‑moving and tightly regulated environment.

 

Beyond efficiency gains, AI enables scalable automation of routine and knowledge‑intensive tasks, freeing up skilled employees to focus on higher‑value activities such as innovation, client advisory and strategic growth. At the same time, AI augments human capabilities, helping less experienced staff perform at a higher level and supporting teams more effectively overall. These developments do not come without risk, however, making it essential for financial institutions to balance productivity gains with responsible AI oversight.

 

Despite its transformative potential, AI adoption comes with material risks, including model reliability, data quality, bias, transparency, third‑party dependencies and regulatory compliance. To address these challenges, financial institutions increasingly rely on structured approaches such as comprehensive generative AI risk management frameworks, ensuring that risks are identified early and managed effectively. Realizing sustainable value therefore depends on embedding AI within a robust governance structure that supports risk mitigation, secures accountability, maintains human oversight and aligns with supervisory expectations, including FINMA's evolving requirements. Organizations that balance innovation with disciplined risk management and implement trusted‑AI principles will be best positioned to improve efficiency, enhance customer trust and achieve long‑term competitive advantage in the digital economy.


Summary

While Switzerland lacks specific AI legislation, it effectively regulates AI through existing laws and closely monitors international trends. Various FINMA statements underscore the importance of managing AI-related risks in governance, reliability, transparency and non-discrimination. The financial sector stands to benefit significantly from AI, but must implement robust governance frameworks to mitigate risks and ensure ethical practices. By doing so, financial institutions can harness AI’s transformative potential while maintaining trust and compliance.

Acknowledgement

We thank Marwa Eid for her valuable contribution to this article.


