Addressing the blind spot of algorithmic bias
Algorithmic bias refers to repeated and systematic errors in a computerized system that generate unfair outcomes, such as favouring one group over another. Bias can derive from a range of factors, including the design of the algorithm, the “training data” used as inputs, or unanticipated applications.
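To make the training-data pathway concrete, consider a minimal, hypothetical sketch (the data and the naive "model" below are invented for illustration, not drawn from the survey): a system that learns from historically skewed decisions will reproduce the skew, giving identical applicants different outcomes depending on group membership.

```python
# Hypothetical historical records: (group, credit_score, approved).
# Group "B" applicants were historically approved only at higher scores.
history = [
    ("A", 600, True), ("A", 650, True), ("A", 550, True),
    ("B", 600, False), ("B", 650, True), ("B", 550, False),
]

# A naive "model": learn the lowest score ever approved, per group.
threshold = {}
for group, score, approved in history:
    if approved:
        threshold[group] = min(threshold.get(group, score), score)

def predict(group, score):
    # The learned thresholds replicate the historical disparity:
    # the same score passes for one group and fails for the other.
    return score >= threshold.get(group, float("inf"))

print(predict("A", 600))  # True
print(predict("B", 600))  # False — same score, different outcome
```

Nothing in the code mentions fairness or intent; the discrimination enters entirely through the data the system was trained on, which is why such bias is easy to miss.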
EY’s survey with The Future Society, “Bridging AI’s trust gaps” (pdf), completed in late 2019 just before the arrival of the novel coronavirus, asked policymakers and companies to prioritize ethical principles across a range of AI use cases. The results show that the biggest divergence in ethical priorities between companies and policymakers across applications concerned the principle of fairness and avoiding bias. Further, we specifically asked companies and policymakers about two topical use cases: facial recognition and surveillance.
The chart below shows the ten biggest gaps in ethical priorities between policymakers and companies out of more than 100 measured in total across the survey. Four of the ten largest concern the principle of fairness and avoiding bias (highlighted in grey). The gaps around bias for surveillance and facial recognition applications, particularly relevant in the post-COVID-19 world, represent the fourth- and fifth-biggest in the entire data set.
The data indicates that many companies do not appreciate the importance that policymakers and the public are placing on bias, and are failing to identify, assess, and manage the risks arising from the use of potentially discriminatory algorithms. Consequently, companies may be developing products and services that will gain little traction in the market, because they are poorly aligned with emerging values, preferences, and regulatory guidance.