Why unbiased AI is essential to building a better working world

By Ben Falk

Director, Global Insights, Research Institute | Global Markets – EY Knowledge

Dad. Gardener. Foodie. Wildlife lover. Social scientist and reformer of capitalism. Studying interactions between technology, markets, and public policy, focusing on AI, data, intangibles, and crypto.

4 minute read 17 Jul 2020

It’s crucial for the long-term development of AI that the technology is perceived as fair. Otherwise, trust in AI may be lost for a generation.

The COVID-19 crisis has placed unprecedented strain on social contracts around the world. The pandemic’s disproportionate impact on minority and underprivileged communities, both in terms of health and economic costs, has exposed systemic inequalities along racial and ethnic lines in many societies.

Public anger is rightfully directed at the status quo, creating near-term risks for companies navigating this volatile social climate. Amid the global Black Lives Matter protests, businesses are being forced to reassess their social responsibilities, including ensuring the deployment of artificial intelligence (AI) technologies is fair and unbiased.

IBM, for example, recently announced that it will no longer offer facial recognition or surveillance technology due to concerns over bias. Microsoft and others have placed a moratorium on facial recognition collaborations with law enforcement agencies. These moves demonstrate not only ethical leadership, but also astute risk management.

Addressing the blind spot of algorithmic bias

Algorithmic bias refers to repeated and systematic errors in a computerized system that generate unfair outcomes, such as favouring one group over another. Bias can derive from a range of factors, including the design of the algorithm, the “training data” used as inputs, and unanticipated applications.
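
To make “favouring one group over another” concrete, one common diagnostic is demographic parity: comparing the rate of favourable outcomes across groups. A minimal sketch, using hypothetical decision data:

```python
# A minimal sketch of a demographic parity check on hypothetical
# model decisions (1 = favourable outcome) and group labels.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[groups == "A"].mean()  # 0.60
rate_b = decisions[groups == "B"].mean()  # 0.20

# A large gap signals the system favours one group over the other.
print(f"demographic parity gap: {rate_a - rate_b:.2f}")  # 0.40
```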

EY’s survey with The Future Society, “Bridging AI’s trust gaps” (pdf), completed in late 2019 just before the arrival of the novel coronavirus, asked policymakers and companies to prioritize ethical principles across a range of AI use cases. The results highlight that the biggest divergence in ethical priorities between companies and policymakers across applications concerned the principle of fairness and avoiding bias. Further, we specifically asked both groups about two topical use cases: facial recognition and surveillance.

The chart below shows the ten biggest gaps in ethical priorities between policymakers and companies out of more than 100 measured across the survey. Four of the ten largest concern the principle of fairness and avoiding bias (highlighted in grey). The gaps around bias for surveillance and facial recognition applications, particularly relevant in the post-COVID-19 world, represent the fourth and fifth biggest in the entire data set.

The data indicates that many companies do not appreciate the importance policymakers and the public are placing on bias, and are failing to identify, assess, and manage the risks arising from potentially discriminatory algorithms. Consequently, companies may be developing products and services that will gain little traction in the market because they are poorly aligned to emerging values, preferences, and regulatory guidance.

Chart: Biggest gaps across all use cases and principles

These risks require active mitigation beyond moratoria, and recent corporate actions suggest the gaps may be closing. Companies should consider going further by exposing both their models and governance frameworks to broader scrutiny, such as review by independent external auditors or even the general public. A close examination of model training data is also necessary, as bias can seep in inadvertently. For example, excluding variables describing race or ethnicity might appear to eliminate the risk of bias, but if the real world is segregated by post code, then an address can still reveal those sensitive characteristics and generate biased outcomes despite good intentions.
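
To illustrate this proxy effect, here is a minimal sketch in Python using synthetic data and hypothetical feature names (post_code standing in for any address-derived variable). A model trained without any race or ethnicity input still reproduces historically biased decisions, because post code is correlated with group membership.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# In a segregated world, post code is a near-perfect proxy for group:
# 90% of each group lives in "its" post code.
post_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# A genuinely neutral feature, independent of group.
income = rng.normal(50, 10, n)

# Historical decisions were biased: group 1 was approved far less often.
approved = (rng.random(n) < np.where(group == 0, 0.7, 0.4)).astype(int)

# Train only on the apparently neutral features.
X = np.column_stack([income, post_code])
pred = LogisticRegression().fit(X, approved).predict(X)

# The model reproduces the bias via the proxy: approval rates diverge sharply.
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

Auditing for this kind of leakage means checking model outcomes against protected attributes directly, even when those attributes are deliberately excluded from the model’s inputs.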

EY Unbiased AI

Interested in learning more about how you can ensure your AI remains unbiased? Find out more about EY’s upcoming service offering.

Businesses must consider the unintended consequences for society, not just technical accuracy. Pandemics have historically been associated with racism, xenophobia, and class conflict. Haphazard deployment of discriminatory algorithms may worsen an already tragic situation.

Key considerations for the development of unbiased AI

  • How do you design an AI system that is unbiased if our societies are systemically and institutionally biased?
  • What are the implications for our society if companies deploy biased algorithms into a world already rife with discrimination?
  • How can we ensure emerging technologies are not deployed unfairly against vulnerable groups?
  • What steps should companies take to minimize the risk of algorithmic bias?
  • What rights should a victim of algorithmic discrimination have to redress unfairness?

Summary

The COVID-19 pandemic’s disproportionate impact on minority and underprivileged communities, both in terms of health and economic costs, has exposed systemic inequalities along racial and ethnic lines in many societies. Businesses are being forced to reassess their social responsibilities, including ensuring the deployment of artificial intelligence (AI) technologies is fair and unbiased.
