The challenge of misinterpreting bias
To address AI bias, organizations first have to recognize what is, and is not, actual bias leaking into their AI systems. Some responses or results may be problematic rather than biased: the historical, quantitative data the AI system used to formulate its response may lack full or proper context. In such cases, AI outcomes may accurately mirror societal realities rather than indicate bias.
Consider an AI system designed to evaluate loan applications. If historical data indicates that certain applicants have a higher likelihood of defaulting on loans due to various economic factors, the AI may reflect this trend in its approval predictions. This outcome does not necessarily imply bias; it may simply represent the existing patterns in financial behavior and risk assessment based on the data available. In this case, the AI’s predictions could accurately mirror the realities of lending practices rather than exhibit unfair treatment.
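To make this concrete, here is a minimal, purely illustrative Python sketch (all data simulated, region labels hypothetical; assumes NumPy and scikit-learn are available). Default risk depends only on a debt-to-income ratio, yet the model's predicted risk still differs between two hypothetical regions because the underlying economic factor does:

```python
# Illustrative only: a model trained on historical lending data reproduces
# the patterns in that data. All numbers and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Economic driver: debt-to-income ratio differs across two hypothetical regions.
region = rng.integers(0, 2, size=n)
dti = rng.normal(loc=np.where(region == 1, 0.45, 0.30), scale=0.10, size=n)

# Default risk depends only on debt-to-income, not on region itself.
p_default = 1 / (1 + np.exp(-(8 * (dti - 0.50))))
default = rng.random(n) < p_default

# The model never sees region, only the economic factor.
model = LogisticRegression().fit(dti.reshape(-1, 1), default)
pred = model.predict_proba(dti.reshape(-1, 1))[:, 1]

for r in (0, 1):
    mask = region == r
    print(f"region {r}: observed default rate {default[mask].mean():.2%}, "
          f"mean predicted risk {pred[mask].mean():.2%}")
```

The per-region gap in predicted risk here reflects the simulated economic distribution, not unfair treatment by the model; distinguishing the two is exactly the analytical task described below.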
Similarly, in the health care industry, an AI system used for predicting patient outcomes may show that certain demographic groups have higher rates of specific health conditions based on historical data. For example, if a particular community has a higher prevalence of diabetes due to genetic or socioeconomic factors, the AI may predict higher risks for individuals from that community. This prediction does not inherently indicate bias; rather, it reflects the actual health trends observed in the population, allowing health care providers to allocate resources and interventions more effectively.
The challenge arises when real-world distributions — the natural variations and inequalities present in society — are misinterpreted as bias. Stakeholders may mistakenly label AI outcomes as biased without considering the underlying societal context. This misinterpretation can lead to unnecessary changes in AI models or data collection practices, diverting attention from addressing the root causes of the disparities.
It is crucial to conduct thorough analyses to determine whether observed differences in AI outcomes stem from bias or reflect real-world distributions. This involves examining the data, understanding the context and considering the broader societal factors at play.
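One common form such an analysis takes is comparing raw outcome gaps between groups with the gaps that remain after conditioning on legitimate risk factors. The sketch below illustrates the idea in Python with pandas; the file name and column names (group, risk_score, approved) are hypothetical placeholders, not a prescribed schema:

```python
# Illustrative analysis sketch: does an outcome gap persist once applicants
# with comparable risk profiles are compared? A gap that disappears within
# risk strata points to real-world distribution differences; a gap that
# persists warrants a closer bias review.
import pandas as pd

df = pd.read_csv("loan_decisions.csv")  # assumed columns: group, risk_score, approved (0/1)

# 1. Raw approval rates by group (can be misleading on their own).
print(df.groupby("group")["approved"].mean())

# 2. Approval rates by group *within* comparable risk strata.
df["risk_bucket"] = pd.qcut(df["risk_score"], q=5, labels=False)
conditional = df.groupby(["risk_bucket", "group"])["approved"].mean().unstack()
print(conditional)

# 3. Largest residual within-stratum gap: a rough flag for further review,
#    not a verdict on its own.
gap = (conditional.max(axis=1) - conditional.min(axis=1)).max()
print(f"largest within-stratum approval gap: {gap:.2%}")
```

A residual gap is a starting point for investigation, not proof of bias; the broader societal and data-collection context still has to be examined alongside the numbers.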
Another important aspect to consider is the need for equal representation in training data. For instance, facial recognition technology should be trained on diverse data sets that include individuals with light, medium and dark skin tones, as well as various ages, genders, ethnic backgrounds and physical characteristics. This helps the AI system perform more accurately across all demographics; a minimal representation check is sketched below.

By understanding the nuances of this distinction, organizations can develop more effective strategies for identifying and mitigating bias while acknowledging the complexities of societal distributions. This balanced approach contributes to the creation of fairer AI systems that reflect the world they serve.
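As a minimal sketch of the representation check mentioned above, the following Python example counts training examples per demographic attribute and flags thin coverage. The metadata file, column names and 5% threshold are all assumptions for illustration:

```python
# Illustrative representation audit: count training examples per demographic
# category and flag any category whose share falls below an assumed threshold.
from collections import Counter
import csv

MIN_SHARE = 0.05  # assumed cutoff: flag categories below 5% of the data

with open("training_metadata.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # assumed columns: skin_tone, age_band, gender

for attribute in ("skin_tone", "age_band", "gender"):
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    for category, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- underrepresented" if share < MIN_SHARE else ""
        print(f"{attribute}={category}: {count} ({share:.1%}){flag}")
```

A simple count like this will not catch every coverage gap (for example, rare combinations of attributes), but it is an inexpensive first pass before deeper auditing.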