
How AI is transforming governance and risk management in insurance

By Rasmus Kolding

Manager, Financial Services Consulting, EY Denmark

Regulatory compliance professional. Passionate about the governance of businesses in the financial services sector. Adept at connecting the legal and business verticals of businesses.

8 minute read 23 Aug 2022

The use of AI models in insurance underscores insurers' need to diligently review their model portfolios and their risk and governance frameworks.

In brief
  • Insurers need to react to the new risks emerging from the increasing use of AI models in the insurance sector.
  • New EU regulation sets significantly higher standards for model governance, with potential fines higher than those for GDPR breaches.
  • Now is a good time for insurers to review their risk and governance frameworks for data and models.

Only a decade ago, most statistical and mathematical models used in the insurance sector were relatively simple, required little computational power, and produced outcomes that were easy to explain and understand. Over the last decade, however, the sector has started to use more advanced modeling tools. These tools, sometimes referred to as AI agents or black-box methods, or in more technical terms machine learning models, often deliver better predictive performance. But the higher performance often comes at the cost of lower model explainability and lower transparency to users.

In the insurance sector, we see AI being used for pricing, claims handling, chatbots, automatic email replies, fraud detection, speech recognition and more. Much of this development is still in its early phases, but many insurers have deployed their first AI models and are using them on customer data. This may seem like a small step for a sector that already makes wide use of statistical models, but it introduces several new risks.

How AI differs from other models used by insurers

Statistical models are developed to find relationships between variables by imposing an explicit mathematical expression for those relationships. The model's parameters are estimated from an underlying dataset, and established methods exist to validate the model's outcome. Because statistical models explicitly encode the relationships between observed variables, they are typically easier to interpret and do not require as much data as more advanced AI models do.
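As a concrete illustration of such a statistical model, below is a minimal sketch of a Poisson GLM for claim frequency, where the functional form is imposed explicitly and each fitted coefficient has a direct interpretation. The features and data are hypothetical; scikit-learn is used purely for illustration.

```python
# Minimal sketch of a classical statistical model: a Poisson GLM for
# claim frequency. The functional form is imposed explicitly, so each
# fitted coefficient is directly interpretable as a log-rate effect.
# Features and data below are hypothetical.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.integers(18, 80, n).astype(float),  # policyholder age
    rng.uniform(0.0, 50.0, n),              # annual mileage (thousand km)
])
# Simulate claim counts from a known log-linear rate
true_rate = np.exp(-2.0 + 0.01 * X[:, 0] + 0.01 * X[:, 1])
y = rng.poisson(true_rate)

model = PoissonRegressor(alpha=0.0, max_iter=1000).fit(X, y)
# The coefficients map directly onto multiplicative effects on the claim rate
print("intercept:", round(model.intercept_, 3))
print("coefficients (age, mileage):", model.coef_.round(3))
```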

AI models, by contrast, are developed to make high-accuracy predictions by adapting to training data without any imposed structure. As a result, they lack a direct interpretation, require large amounts of data, and are difficult to validate. Because of this added complexity, AI models come with different risks than the statistical models traditionally used in the insurance sector. Some of the main risks related to advanced AI models are:

  • Black box problems. The models are often difficult to interpret, and it can be difficult to explain their outcomes and results because the models offer little transparency. This can create problems in relation to customers: for example, a claim may be rejected without the customer being able to understand the reasoning behind the rejection. The inability to understand the rationale behind a model also harms the insurance provider itself.
  • Sensitive information. Data may be anonymized or pseudonymized, but combining large amounts of anonymized data can still reveal sensitive information. This is a GDPR risk, and also a cyber risk if hackers gain access to anonymized or pseudonymized data. While statistical models face this issue too, the challenge is more pronounced for AI models because they process far more data points.
  • Bias. Training data that does not fully reflect the customer cohort the AI model is applied to may introduce bias in predictions or outcomes. In addition, many AI models in production are so-called supervised models, which learn from human-labeled data. If the labels the model is trained on are biased, there is a high risk that the bias will be amplified by the model.
  • Instability. AI models may become unstable over time due to retraining on new data, changes in the population being predicted, data errors, and so on. Because AI models are largely black boxes, instability may go unnoticed for a long time unless explicit routines are in place to monitor the models; a minimal monitoring sketch follows this list.
  • Increased GDPR risk. AI models require large amounts of data, which increases GDPR risk. As data typically comes from many sources, it may be difficult to ensure that the data processor has permission from the data subject to use the data for the modeling in question. See also the risk of sensitive information above.
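To illustrate the monitoring routines mentioned under instability, here is a minimal sketch that computes a population stability index (PSI) between a model's training data and incoming production data for a single feature. The binning scheme and the conventional 0.2 alert threshold are assumptions, not regulatory requirements.

```python
# Minimal drift-monitoring sketch: population stability index (PSI)
# for one feature, comparing training data with production data.
# The 10-bin scheme and 0.2 threshold are conventional assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI = sum over bins of (p_actual - p_expected) * ln(p_actual / p_expected)."""
    # Bin edges come from the quantiles of the training (expected) data
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    # Clip production values into the training range so all land in a bin
    actual = np.clip(actual, edges[0], edges[-1])
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # A small floor avoids division by zero in empty bins
    e_pct = np.maximum(e_counts / e_counts.sum(), 1e-6)
    a_pct = np.maximum(a_counts / a_counts.sum(), 1e-6)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
training = rng.normal(40, 10, 10_000)    # e.g., customer age at training time
production = rng.normal(45, 12, 2_000)   # the insured population has shifted
score = psi(training, production)
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
```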

Because AI models have these different characteristics, we recommend that all insurers deploying AI models review their model, data governance and risk frameworks.

We are seeing companies revisiting GDPR issues, both because they have found potential holes in their current system set-up and because the use of data is becoming more widespread in the organization. We expect this to continue as a result of AI risks and the proposed Artificial Intelligence Act (AIA), and it often makes sense to review GDPR handling within the broader governance and risk frameworks for data and model risks.

New EU AI regulation

Considering the more widespread use of AI and its many possible applications, the European Commission developed an AI Strategy in April 2021 – see Box 1.

Part of the strategy is a proposal for the regulation of AI, which intends to address some of the issues and risks mentioned above. The purpose of the proposed regulation, the Artificial Intelligence Act (AIA), is to:

  • Ensure that AI systems placed and used in the EU are safe and respect fundamental human rights and EU values.
  • Ensure legal certainty to facilitate investment and innovation in AI.
  • Enhance governance and enforcement of existing law on fundamental human rights and safety requirements.

The draft AIA explicitly mentions all machine learning, logic- and knowledge-based, and statistical approaches. It is thus not limited to machine learning models but may apply to all models used by insurers, such as simple regression models and decision tree solutions.

In March 2022, the European Parliament Committee on Legal Affairs (JURI) proposed changes to the original draft. One suggestion was to remove “logic- and knowledge-based and statistical approaches.” While this would limit the scope of the AIA, it still leaves some models in a grey area. An example is a Bayesian machine learning model, which is often referred to as a statistical model.

On the other hand, one could argue that such a model falls in the category of machine learning models. We believe the deciding factor will ultimately be whether the model poses a threat to citizens’ rights or safety. Pricing models used in the insurance sector may well be in scope, irrespective of whether they are statistical or machine learning models. This would be in line with the draft AIA’s treatment of credit scoring, which is considered in scope and high-risk.

Box 1. EU’s Artificial Intelligence Act

AIA process so far

  • 21 April 2021 – Proposal for the AIA (Artificial Intelligence Act)
  • 22 June 2021 – end of the public hearing
  • 11 April 2022 – presentation of the initial draft report by IMCO and LIBE committees
  • 18 May 2022 – deadline for amendments
  • 11 July 2022 – opinions to be received from the other Committees

What’s next

  • October 2022 – vote in each respective Committee
  • November 2022 – plenary vote
  • December 2022 – start of the trialogues

Will apply to all EU Member States

Once finally approved, the AIA will apply directly in the Member States. As the AIA is still a proposal, not yet approved by the Council and the European Parliament, changes can still occur.

Once adopted, the AIA will apply in the Member States after 24 months.

Fines up to 6% of revenue

If the use of AI is not properly managed, the potential fines are up to 6% of revenue.

Classification of risks and how it could apply to insurers

The AIA draft proposes a risk-based approach with four tiers of risk: unacceptable, high, limited and minimal, as shown in Figure 1. The main focus is on high-risk systems, which will be subject to extensive technical, monitoring and compliance obligations.


Figure 1. Proposed classification of risks in EU’s draft Artificial Intelligence Act

High-risk systems cover a wide range of systems used in the private and public sectors. In November 2021, the Council of the European Union proposed an amendment to the AIA stating that high-risk systems should also include insurers’ use of systems for premium setting, underwriting and claims assessment. The Council elaborates on this stance in the same proposal:

AI systems are also increasingly used in insurance for premium setting, underwriting and claims assessment which, if not duly designed, developed and used, can lead to serious consequences for people’s life, including financial exclusion and discrimination.

The consequences of the Council’s proposal, if passed, will be significant for insurers since it will encompass not only new AI models but also existing models used in pricing, underwriting and claims.

How to best prepare for the new AI regulation

The AIA has not been passed yet, and it may still be a few years until it takes full effect. However, the change in risk from AI models is already real and can have large consequences for insurers. Reputational risk is high if data is used improperly, a model is biased, or decisions are made automatically in ways that put the customer at an unlawful or unethical disadvantage.

For this reason, and since compliance with the AIA may require significant work involving many stakeholders, we recommend reviewing the risk and governance frameworks for data and models now.

Insurance companies should consider making an inventory of systems with applied AI models and, for each model, assessing the risks and defining key risk indicators that will allow the company to track, prioritize and control the related risks; a hypothetical sketch of such an inventory entry follows below.
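As a starting point, such an inventory can be as simple as one structured record per model. The sketch below shows one hypothetical shape for an inventory entry; the field names, risk tiers and thresholds are illustrative assumptions, not an AIA-mandated schema.

```python
# Hypothetical sketch of a model-inventory entry with key risk indicators.
# Field names and risk tiers are illustrative, not an AIA-mandated schema.
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    model_id: str
    business_use: str                 # e.g., "pricing", "claims assessment"
    model_type: str                   # e.g., "GLM", "gradient boosting"
    risk_tier: str                    # e.g., "high" per the draft AIA tiers
    owner: str
    uses_personal_data: bool
    key_risk_indicators: dict[str, float] = field(default_factory=dict)

    def breached(self, thresholds: dict[str, float]) -> list[str]:
        """Return the KRIs whose current value exceeds the given threshold."""
        return [name for name, value in self.key_risk_indicators.items()
                if name in thresholds and value > thresholds[name]]

entry = ModelInventoryEntry(
    model_id="motor-pricing-v3",
    business_use="pricing",
    model_type="gradient boosting",
    risk_tier="high",
    owner="Pricing Analytics",
    uses_personal_data=True,
    key_risk_indicators={"psi_age": 0.27, "complaint_rate": 0.01},
)
print(entry.breached({"psi_age": 0.2, "complaint_rate": 0.05}))  # ['psi_age']
```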

Examples of the questions that should be asked during the life cycle of an AI model can be seen in Figure 2.


Figure 2. Examples of some of the questions that should be asked during the life cycle of an AI model.

From a larger perspective, we recommend assessing AI models on three different levels; a simple scoring sketch follows the list:

  • Business objective. Is the AI/ML agent properly designed and operated as a tool to support its intended business process or decision?
  • Governance. Is the AI agent properly governed and managed?
  • Operations and controls. Is the AI agent well specified, implemented correctly and performing as expected?
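One lightweight way to operationalize these three levels is a scored checklist per model. The sketch below is a hypothetical illustration; the questions and pass criteria are assumptions, not a prescribed framework.

```python
# Hypothetical sketch: scoring a model against the three assessment levels.
# Questions and pass criteria are illustrative assumptions.
ASSESSMENT = {
    "business_objective": [
        "Is the intended business decision documented?",
        "Is the model output used as designed in that process?",
    ],
    "governance": [
        "Is there a named model owner and an approval record?",
        "Are retraining and change procedures documented?",
    ],
    "operations_and_controls": [
        "Is the model specification versioned and reproducible?",
        "Are performance and drift monitored against thresholds?",
    ],
}

def assess(answers: dict[str, list[bool]]) -> dict[str, str]:
    """Mark a level 'pass' only if every question for it is answered yes."""
    return {
        level: "pass" if answers.get(level) and all(answers[level]) else "review needed"
        for level in ASSESSMENT
    }

print(assess({
    "business_objective": [True, True],
    "governance": [True, False],            # change procedures are missing
    "operations_and_controls": [True, True],
}))
# {'business_objective': 'pass', 'governance': 'review needed',
#  'operations_and_controls': 'pass'}
```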

Conclusion

The use of AI models will change most insurers’ risk profiles significantly and will mandate new governance and controls. Furthermore, the proposed EU regulation applies not just to AI models but to all machine learning, logic- and knowledge-based and statistical approaches.


Summary

EY teams have experience in helping clients in the insurance sector with AI governance frameworks and AI data strategies, discovery and mapping. You can leverage the extensive experience of EY teams to discuss AI issues, new regulations, etc., and bring the right mix of experts to the table to answer pressing questions.
