Press release

5 Aug 2020

EY global study finds disconnect on ethical artificial intelligence (AI) priorities between public and private sector

Press contact
Ying Shan Ho

Media Relations (Consulting, Digital, Talent) and Social Media Lead, Ernst & Young Solutions LLP


  • Discrepancies exist between companies and policymakers on future of AI governance
  • Both groups show potential blind spots, posing risks

EY today released findings from a global survey that reveals significant differences in how the public and private sectors view the future of ethics, governance, privacy, policy and regulation of artificial intelligence (AI) technologies. According to the Bridging AI’s trust gaps report, developed in collaboration with The Future Society, discrepancies exist in four key areas: fairness and avoiding bias; innovation; data access; and privacy and data rights.

In the survey of 71 policymakers and more than 280 global organizations, respondents ranked ethical principles by importance for 12 different AI use cases, and sentiment was measured around the risk and regulation of AI. 

Policymakers align around specific priorities, while private sector lacks consensus

Policymakers’ responses show widespread agreement on the ethical principles most relevant for different applications of AI. For example, on the use of AI for facial recognition, policymakers rated “fairness and avoiding bias” and “privacy and data rights” as the top two concerns by a wide margin.

Yet private sector priorities on the same question were relatively undifferentiated. In fact, private sector responses across use cases and principles were more evenly distributed, with narrow margins separating the top choices. Moreover, the private sector’s top choices were principles prioritized by existing regulations, such as the EU General Data Protection Regulation (GDPR), rather than emerging issues such as fairness and non-discrimination.

Singapore’s private sector has yet to make ethical AI a priority. The second edition of Singapore’s Model AI Governance Framework, released in January 2020, noted that only 15 companies had adopted the framework.

Cheang Wai Keat, Singapore Head of Consulting, Ernst & Young Advisory Pte. Ltd. shares how a lack of AI governance generates risks: 

“Significant misalignments around fairness and avoiding bias generate substantial market risks, as companies may be deploying products and services poorly aligned to emerging social values and regulatory guidance. However, companies that are able to establish trust in their AI-powered products and services will be at an advantage.”

Disagreement about the future direction of governance poses risks  

While both policymakers and companies agree that a multi-stakeholder approach is needed to guide the direction of AI governance, results show disagreement on what form it will take: 38% of organizations surveyed expect the private sector to lead a multi-stakeholder framework, but only 6% of policymakers agree. This disconnect poses potential challenges for both groups in driving governance forward, and it also presents market and regulatory risks for companies developing AI products while governance approaches are still under discussion.

Benjamin Chiang, EY Asean and Singapore Government & Public Sector Leader, Ernst & Young Advisory Pte. Ltd. acknowledges: 

“It will not be easy to bridge the disconnect between the public and private sector; however, it must be a national imperative. Governments must make closing the AI trust gap a top priority, as this will allow the acceleration of digital transformation necessary to address the urgent public health, social and economic challenges before us.”

Overcoming differences through collaboration

The survey results found that each stakeholder group has blind spots when it comes to the implementation of ethical AI: 69% of companies agreed that regulators understand the complexities of AI technologies and business challenges, while 66% of policymakers disagreed.

These findings suggest that greater collaboration between both groups will be critical to overcoming knowledge gaps. Policymakers should take a consultative and deliberate approach with input from the private sector, particularly on the technical and business complexities for which policymakers lack expertise. Similarly, the private sector should work to reach consensus around AI governance principles, so that the requirements of both parties are taken into account in future regulation.

Nigel Duffy, EY Global Artificial Intelligence Leader, says:

“As AI transforms business and industries, poor alignment diminishes public trust in AI and slows the adoption of critical applications. For efforts to be fruitful, companies and policymakers need to be aligned. Coordination between both sets of stakeholders is critical to developing pragmatic policy and governance approaches that are informed by constraints and realities on the ground.”

Gil Forer, EY Global Markets Digital and Business Disruption Leader, says:

“As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks. Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic.” 

To access the Bridging AI’s trust gaps report, click here.


Note to editors

About the report 

The EY web-based survey was conducted between 2019 and early 2020. It obtained responses from 71 policymakers and 284 companies across 55 countries. 

About EY

EY is a global leader in assurance, tax, strategy, transaction and consulting services. The insights and quality services we deliver help build trust and confidence in the capital markets and in economies the world over. We develop outstanding leaders who team to deliver on our promises to all of our stakeholders. In so doing, we play a critical role in building a better working world for our people, for our clients and for our communities.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data, and a description of the rights individuals have under data protection legislation, is available on the EY website. For more information about our organization, please visit the EY website.

This news release has been issued by Ernst & Young Advisory Pte. Ltd, a member of the global EY organization.

About The Future Society 

The Future Society is an independent 501(c)(3) nonprofit think-and-do tank. Specializing in questions of impact and governance, its mission is to help advance the responsible adoption of AI and other emerging technologies for the benefit of humanity. The Future Society leverages a global, multidisciplinary network of experts, practitioners and institutional partners, and tackles a broad but carefully selected range of short-term and longer-term issues in AI governance.