Recent advances in artificial intelligence (AI) have captured the imagination of everyone from C-suite leaders to junior staffers, with use cases ranging from helping diagnose illnesses to planning your next vacation. But how do we know the answers these systems produce are fair and unbiased?
For heavily regulated industries such as financial services, this question takes on added importance. These services cut to the core of how we plot out our lives, fulfill our goals, and protect our loved ones, and the penalties and reputational damage for bias and discrimination can be steep.
When calculating individual risk scores, AI models can draw on a broad range of variables, such as socioeconomic and lifestyle factors, as well as external consumer data and information sources (ECDIS), otherwise known as alternative data, for greater accuracy. But AI often lacks transparency in how it teases out relationships between data points and certain demographics.
A report by Infopulse highlights the transformation AI is driving within the insurance industry: 77% of leaders in the sector were at some stage of adopting AI across functions in 2024, a jump of 16 percentage points from the prior year.¹ But despite these advances, the prospect of unfair bias and discrimination looms, particularly in the form of unequal pricing and inadequate coverage. As a result, state regulators, such as those in Colorado and New York, are setting up new guardrails around how insurers use AI.
Missouri-based Reinsurance Group of America (RGA), a global reinsurance company that focuses on life and health solutions, leverages AI in its predictive models. The company’s models are built upon vast amounts of data from across the globe, which positions RGA to help carriers better assess risks.
Given the stakes, how could RGA give its insurance clients greater confidence that it has a robust methodology for detecting unfair bias? The company turned to an advisor with a proven fairness methodology and a track record honed over decades: Ernst & Young LLP (EY US).
EY US would enhance RGA’s approaches to testing its insurance models for compliance, informed by the risks of bias and discrimination, while keeping the models useful for business purposes. Working together, EY US and RGA established a new bar for responsible innovation.