3 minute read 25 Sep 2020

Why closing the AI trust gap requires a public-private dialogue

By Katie Kummer

EY Global Deputy Vice Chair – Public Policy

Three decades leading and coaching diverse teams. Helping shape EY public policy goals. Mother to twin girls. Sports enthusiast. Movie buff. Strong proponent of workplace neurodiversity.

Related topics Public policy AI

The disconnect on AI between policymakers and companies diminishes trust and slows the adoption of critical applications.

In brief
  • The COVID-19 pandemic is accelerating the momentum for policymakers and companies to find ways to address ethics and trust in AI.
  • Dialogue is important to prevent policy fragmentation and realize the full benefits and potential of AI. 

With use cases as varied as virtual assistants and law enforcement surveillance, artificial intelligence (AI) is being rapidly deployed, with the potential to change our lives dramatically. It helps recruit us for job openings, approve or deny our loan requests, and even drive our cars — and in the process, it raises urgent questions about ethics, bias and consumer protection.

The COVID-19 pandemic is accelerating the turn toward AI, through contact-tracing algorithms and more, while adding greater complexity to discussions of privacy, security and fairness. The momentum is now behind policymakers and executives to find ways to address ethics and trust in AI. Against this backdrop, a timely EY survey shows the need for active dialogue between policymakers and the private sector to understand and align interests. Without that alignment, the survey notes, companies risk developing products or providing services that conflict with the regulatory environments in which they operate.

Graphic: Implementing AI ethical principles across countries

One fault line: the degree to which policymakers and companies expect AI ethical principles to vary across countries/regions. Among companies, 55% foresee one set of universal principles adopted by a majority of countries, a view shared by only 21% of policymakers. Instead, 61% of policymakers — compared with just 10% of companies — predict that one set of universal standards will emerge, but they will allow for different implementation across countries. In other words, those who are closer to policy decisions see greater complexity and ambiguity around compliance.

The two sides also hold mirror-image views of the technical and business details of AI applications, beyond the ethical concerns those applications raise. Two-thirds of policymakers agreed that regulators don’t understand the complexities of AI technologies and the associated business challenges. In another example of the mismatch, 79% of policymakers disagree with the statement “It is not possible to mandate AI explainability, since it is very difficult to explain to the average person,” compared with just 16% of companies. As the survey puts it, “While policymakers understand the ‘big picture’ ethical concerns raised by AI applications...they might not be as immersed in technical and business details as companies,” illustrating the need for increased dialogue between those establishing AI frameworks and standards and those who will be required to comply.

Graphic: Policymakers are less immersed in technical and business details

The gap extends to the trust each side places in the other. Far fewer policymakers (44%) than companies (72%) agree with the statement “Companies use AI to benefit consumers and society.” Companies are equally skeptical of policymakers’ intentions: roughly 60% of companies agree that “self-regulation by industry is better than government regulation of AI,” while about the same share of policymakers hold the opposite view.

These results from our survey underscore the importance of dialogue, so that each side better understands the need for policy frameworks that don’t create deep fragmentation across geographies and that don’t impair growth and the benefits that AI can offer. It’s time for a consultative and deliberative approach, with input from the private sector, ultimately to boost trust and confidence — and to benefit society as a whole.

Summary

As the use of AI accelerates around the world, related ethical issues such as privacy, security and fairness grow more complex. Policymakers and companies must find ways to align in addressing ethics and trust in AI.
