6 minute read 28 Jul 2023

The EU AI Act: What it means for your business

Authors
Madan Sathe

Partner, Forensic & Integrity Services | EY Switzerland

I fight financial crime, fraud and non-compliance using analytics and innovation, helping financial services clients become more efficient in their digital transformation.

Karl Ruloff

Director | Forensic & Integrity Services | EY Switzerland

I am a data science leader with more than 20 years of experience, combining data and technology with regulatory and business understanding to create value for my clients.

The EU's regulation on artificial intelligence is coming. What does it mean for you and your business in Switzerland?

In brief

  • The EU AI Act brings strict requirements, including for organizations that have not had to deal with model management until now.
  • As a first step, organizations should gain an overview, build a repository of all models and implement model management.
  • Even though the regulation is not yet final, it is clearly on the horizon, and the remaining time should be used to prepare.

Artificial Intelligence (AI) is transforming our world in unprecedented ways. From personalized healthcare to self-driving cars and virtual assistants, AI is becoming ubiquitous in our daily lives. However, this growing use of AI has raised many concerns about its impact on fundamental rights and freedoms. In response to this, the European Union (EU) has taken a significant step to regulate AI.

The EU AI Act, also known as the Artificial Intelligence Act, is the world's first concrete initiative for regulating AI. It aims to turn Europe into a global hub for trustworthy AI by laying down harmonized rules governing the development, marketing, and use of AI in the EU. The AI Act aims to ensure that AI systems in the EU are safe and respect fundamental rights and values. Moreover, its objectives are to foster investment and innovation in AI, enhance governance and enforcement, and encourage a single EU market for AI.

Who is affected?

The AI Act sets out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI models will be held accountable. The AI Act also applies to providers and users of AI systems located outside the EU, e.g., in Switzerland, if the output produced by the system is intended to be used in the EU.

What is required?

Step 1: Model inventory – understanding the current state

To understand the implications of the EU AI Act, companies should first assess whether they have AI models in use or in development, or are about to procure such models from third-party providers, and list the identified models in a model repository. Many financial services organizations can build on existing model repositories and the surrounding model governance, adding AI as an additional topic.

Organizations that have not needed a model repository so far should start with a status quo assessment to understand their (potential) exposure. Even if AI is not used at present, it is very likely that this will change in the coming years. An initial identification can start from an existing software catalogue or, if this is not available, with surveys sent to the various business units.
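
What such a repository records is not prescribed in detail by the Act. As a minimal sketch in Python, assuming illustrative field names and an entirely hypothetical example model, an inventory entry could look as follows:

from dataclasses import dataclass

@dataclass
class ModelInventoryEntry:
    """One record in the AI model repository (all fields illustrative)."""
    name: str                        # internal model identifier
    owner: str                       # accountable business owner
    business_unit: str               # unit operating the model
    purpose: str                     # what the model decides or supports
    lifecycle_stage: str             # e.g. "in development", "in use", "procured"
    third_party_provider: str | None = None  # vendor, if externally sourced
    output_used_in_eu: bool = False  # relevant for EU AI Act applicability

# Hypothetical example entry: a procured credit scoring model
entry = ModelInventoryEntry(
    name="credit-scoring-v2",
    owner="Retail Banking Risk",
    business_unit="Retail Banking",
    purpose="Creditworthiness assessment of loan applicants",
    lifecycle_stage="in use",
    third_party_provider="ExampleVendor AG",
    output_used_in_eu=True,
)

Fields such as output_used_in_eu matter because, as noted above, the Act can apply to organizations outside the EU whenever a system's output is intended for use in the EU.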

Step 2: Risk classification of models

Based on the model repository, the AI models can be classified by risk. The EU AI Act distinguishes different risk categories:

The Act lays out examples of models posing an unacceptable risk; models falling into this category are prohibited. Examples include the use of real-time remote biometric identification in public spaces, social scoring systems, and subliminal influencing techniques that exploit the vulnerabilities of specific groups.

High-risk models are permitted but must comply with multiple requirements and undergo a conformity assessment, which needs to be completed before the model is placed on the market. These models must also be registered in an EU database that is to be set up. Operating high-risk AI models requires an appropriate risk management system, logging capabilities, and human oversight with clear ownership. Proper data governance must be applied to the data used for training, testing and validation, along with controls ensuring the cybersecurity, robustness and fairness of the model.

Examples of high-risk systems include models related to the operation of critical infrastructure, systems used in hiring processes or employee ratings, credit scoring systems, automated insurance claims processing, and the setting of risk premiums for customers.

The remaining models are considered limited or minimal risk. For those, transparency is required, i.e., users must be informed that what they are interacting with is generated by AI. Examples include chatbots or deepfakes that are not considered high risk but for which users must be made aware that AI is behind them.

For all operators of AI models, the implementation of a Code of Conduct around ethical AI is recommended.
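
To make this classification operable in a model repository, the tiers and a first mapping of use cases can be encoded directly. The following Python sketch uses only the example use cases named above; it is an illustration, not a substitute for a case-by-case legal assessment:

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, registration and ongoing controls"
    LIMITED_OR_MINIMAL = "transparency obligations"

# Illustrative mapping of use cases named in the draft to risk tiers.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "hiring / employee rating": RiskTier.HIGH,
    "critical infrastructure operation": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED_OR_MINIMAL,
    "deepfake": RiskTier.LIMITED_OR_MINIMAL,
}

def classify(use_case: str) -> RiskTier | None:
    """Look up a known use case; unknown cases need individual legal review."""
    return USE_CASE_TIERS.get(use_case)

print(classify("credit scoring"))  # RiskTier.HIGH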

Step 3: Prepare and get ready

If you are a provider, user, importer, distributor or affected person of AI systems, you need to ensure that your AI practices are in line with these new regulations. To start the process of fully complying with the AI Act, you should initiate the following steps: (1) assess the risks associated with your AI systems, (2) raise awareness, (3) design ethical systems, (4) assign responsibility, (5) stay up to date, and (6) establish formal governance. By taking proactive steps now, you can avoid potentially significant sanctions for your organization once the Act comes into force.

Please note that this article refers to an ongoing legislative process which might lead to changes of the requirements.

What are the penalties in case of non-compliance?

The penalties for non-compliance with the AI Act are significant and can have a severe impact on a provider's or deployer's business. They range from €10 million to €40 million or 2% to 7% of global annual turnover, depending on the severity of the infringement. Hence, it is essential for stakeholders to fully understand the AI Act and comply with its provisions.
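
To illustrate the order of magnitude, assuming that, as in comparable EU regulations such as the GDPR, the higher of the fixed amount and the turnover-based amount applies to undertakings, a simple calculation with assumed figures:

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Upper bound of a fine: the higher of the fixed cap and the turnover share."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Assumed example: the most severe tier (EUR 40m or 7%) applied to a
# company with EUR 1bn in global annual turnover.
print(max_fine(1_000_000_000, 40_000_000, 0.07))  # 70000000.0, i.e., EUR 70m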

How is the financial services sector impacted by the Act?

Financial services have been identified as one of the sectors where AI could have the most significant impact. The EU AI Act contains a three-tier risk classification model that categorizes AI systems based on the level of risk they pose to fundamental rights and user safety, and the financial sector uses a multitude of models and data-driven processes which will come to rely more on AI in the future.

Processes and models used for creditworthiness assessments or for evaluating customers' risk premiums are expected to fall into the high-risk category. Models used in operating and maintaining financial infrastructure considered critical will also be classified as high risk, as will AI systems used for biometric identification and categorization of natural persons or for employment and employee management. Not yet included in the scope of the risk classification are, amongst others, AI systems used purely to improve customer experience, fraud detection systems, customer lifetime value predictions and pattern analysis (without directly affecting decisions on individual customers).

EY EU AI Act brochure

Download the PDF to get an overview of the EU AI Act and its impact on the markets. 

Download PDF

Summary

The EU AI Act is set to be a significant milestone in the field of AI regulation and innovation. To ensure that the benefits of AI are fully realized while protecting fundamental rights and user safety, it is important for organizations to act now, assess their risks, and start preparing for the changes that the AI Act will bring. By doing so, organizations can move towards a more secure and trustworthy AI environment which will allow them to reap the rewards of this transformative technology.

Acknowledgement:

We kindly thank Konrad Schwenke and Ava Dossi for their contribution to this article.
