Inclusive, people-centered AI
Confidence in AI as a sustainable force for good comes from demonstrating broad-based benefits for people. AI will have to live up to its promise of enhancing the human experience and of creating new jobs in a sustainable economy, rather than widening existing gaps or replacing humans in the workplace.
This was underscored by the recent G7 agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence, which emphasized both the need to use AI to address our greatest challenges and the need to mitigate societal, safety and security risks. US President Biden’s executive order (via EY.com US) also focused on principles to protect human values in AI.
Governments, business leaders and civil society must anticipate the transitions that AI will accelerate, understand the human impact, and ensure those affected are championed in a just transition. From decarbonizing energy to enabling autonomous mobility, creating nature-based climate solutions, and automating low-value hospitality tasks, AI will have wide-ranging impacts on workers and communities dependent on incumbent systems. We must ensure that affected workers and communities have access to the new opportunities created by AI and the skills to secure them.
Concerns about bias are longstanding in AI but become more urgent with the explosion of generative AI, the most quickly adopted technology ever. LLMs generate new content probabilistically based on vast, encyclopedic training data, effectively holding a mirror to culture and society.
AI-generated images frequently expose our biases, amplifying stereotypes anchored in systemic inequities — for example, by depicting lighter-skinned men as the holders of high-paying jobs, and sometimes struggling to illustrate scenarios contrary to stereotypical perceptions even when prompted to do so.14 As GenAI becomes a growing part of creative work and decision-making, we risk reinforcing existing inequities related to gender, ethnicity, age, income and other factors unless we put safeguards in place.
These challenges are complicated by confidence-sapping issues inherent in probabilistic models, such as hallucinations (returning “made up” results), the near impossibility of linking an output to specific training data, and the potential for emergent qualities (unpredictable new capabilities) in LLMs.
As we look to AI to solve our greatest challenges, we must ensure it is inclusive, not just in terms of access and skills, but also in the knowledge and insight represented in training data. For example, there is increasing recognition that the knowledge of indigenous communities, who protect most of Earth’s remaining biodiversity on their lands, has an important role to play in addressing our sustainability challenges. Yet that knowledge is often oral or experiential, and not represented in any AI training data.15 As we continue to develop GenAI, it will be important to incorporate ancestral knowledge and other underrepresented perspectives into the models so that they reflect diverse thinking and sustainability values.
Ultimately, we must guard against a bias toward the systemic status quo in AI, because achieving sustainable outcomes entails shifting mindsets, innovating business models and rethinking our fundamental systems.