The growing relevance of AI is raising new governance and oversight issues for senior insurance leaders: ethics and trust, implementing organizational changes, navigating evolving regulatory frameworks, and effective board oversight.
Trust in AI has multiple dimensions, including freedom from bias, reliability, transparency, and explicability. One challenge is that AI is often held to a higher standard than human beings. One participant pointed out, “The human brain is the most opaque algorithm you can find. We are applying standards to algorithms that we don’t apply to human brains.”
Getting the talent right
AI is shifting human capital demands. It’s particularly difficult to find those who can bridge the gap between business needs and technological capabilities. “You need new skill sets and business translators who can connect, who understand both the art of the possible and your business and your domain,” said one participant. Others acknowledged that insurers will need to reckon with the impact of automation — resulting from the deployment of AI, ML and other technologies — on the workforce.
Several participants noted the increased need for third-party relationships. Many insurers are partnering with start-ups or technology providers to obtain the necessary capabilities, but insurers face challenges in identifying which start-ups are developing genuinely valuable solutions.
Getting data issues right
Data cleaning, maintenance, and engineering are crucial enablers. “Insurers haven’t invested much in core platforms for 30 years, so they don’t have confidence in the quality of their data,” said one participant. Some question how much data is needed and how to deal with it.
AI also raises the stakes on cybersecurity. Data can be stolen, or as one participant noted, “adversaries can poison data sets.” This tactic, known as data poisoning — a form of adversarial machine learning — involves injecting statistical noise or false information into a system’s training data to skew its outcomes.
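The poisoning idea in that quote can be made concrete with a toy sketch. The code below is purely illustrative and assumes nothing about any insurer’s actual pipeline: it “trains” a one-dimensional threshold classifier twice, once on clean data and once on data where an attacker has flipped labels just below the true decision boundary, and compares how each model generalizes. All names (`make_data`, `train_threshold`) are invented for this example.

```python
import random

random.seed(0)

def make_data(n=200):
    # 1D toy data: the true rule labels a point 1 when x > 0.
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, 1 if x > 0 else 0) for x in xs]

def train_threshold(data):
    # "Training" = brute-force search for the cutoff that best fits the data.
    best_t, best_acc = 0.0, 0.0
    for t, _ in sorted(data):
        acc = sum((x > t) == (y == 1) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x > t) == (y == 1) for x, y in data) / len(data)

train, test = make_data(), make_data()

# Poisoning attack: the adversary flips the labels of training points just
# below the true boundary at 0, dragging the learned cutoff down to -0.3.
poisoned = [(x, 1 if x > -0.3 else y) for x, y in train]

clean_acc = accuracy(train_threshold(train), test)
poisoned_acc = accuracy(train_threshold(poisoned), test)
print(f"clean model accuracy:    {clean_acc:.2f}")
print(f"poisoned model accuracy: {poisoned_acc:.2f}")
```

The poisoned model fits its corrupted training set perfectly yet misclassifies the region the attacker targeted — which is why participants stressed that data-quality controls and security are inseparable. Real attacks are far subtler than this label-flipping sketch, but the mechanism is the same: corrupt the inputs to training, and the model faithfully learns the corruption.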
Navigating the regulatory landscape
Widely shared regulatory frameworks for AI have yet to emerge, and there is concern that regulation will not keep pace with the technology. Regulations also differ worldwide. In the US, states are beginning to pass legislation affecting AI and ML, including privacy laws, increasing the complexity of the regulatory environment and leading the corporate community to push for uniform federal privacy legislation. In April 2019, the European Commission’s High-Level Expert Group on AI released its Ethics Guidelines for Trustworthy AI.
Participants emphasized regulators’ concern with potential bias and harm to consumers. Some suggested that regulators will focus on outcomes rather than prior evidence of bias in the models. For regulators no less than for corporations, changing technology requires new skills and the ability to attract the right talent to carry out supervisory responsibilities in the context of rapid change. One participant asked, “What does it mean to be a supervisor in that new world?”
Developing effective board oversight
One participant asked, “At the board level, how do we take directional responsibility for where we want this to go?” Board members, supervisors, and other stakeholders are working to improve their understanding and oversight of these technologies and are in the early stages of building governance structures and developing the necessary skills and expertise for AI and ML.
As with other highly technical areas, boards are exploring how best to develop the necessary expertise and skills to govern and oversee AI.