9 minute read 28 Sep 2022
AI Governance

AI acting ethically in tomorrow’s world depends on how we shape its governance today

By Cathy Cobey

EY Global Trusted AI Consulting Leader

Thought leader in digital trust. Advocate of women in technology. Lover of good food. Film festival enthusiast. Avid learner. Parent.


AI tech is not inherently biased or manipulative; it’s all about how we train and use it…

In brief:

  • Cathy Cobey, EY’s Global Trusted AI Leader, will be speaking at the World Summit Artificial Intelligence (World Summit AI) in Amsterdam on October 12 and 13.
  • She shares her perspective on some of the most important implications and opportunities that AI brings.
  • These include ethical values, human centricity, AI governance and scalability.

Despite the concerns surrounding artificial intelligence, investments in the technology are growing at an accelerated rate based on the expectation that AI will improve the human condition. There are already many examples where AI has greatly enhanced the customer experience and had a material impact on corporate profits. And the potential societal impact is even more impressive. AI has the capability to fast-track the achievement of the United Nations Sustainable Development Goals. AI can be deployed to address illness and disease. AI can be implemented to broadly improve wellbeing for the majority, particularly those in developing countries. However, despite all the wins with AI, there have also been many disappointing and discriminatory outcomes. One thing is clear: decisions we take today in terms of ethics and governance will be crucial in shaping the impact of artificial intelligence on society in 20 years’ time.

The words are those of Cathy Cobey, EY’s Global Trusted AI Leader. As a keynote speaker at the World Summit Artificial Intelligence (World Summit AI) in Amsterdam-Zaandam on October 12/13, Ms Cobey is an acknowledged expert in the ethical and control implications of AI. On the eve of the conference, we asked her for her perspective on a handful of defining themes related to the societal implications of technology transformation.

1. Ethical AI reinforces our fundamental values

“AI is still in the infancy of its technological journey. Our job right now is to set the ethical and safety rails that AI should operate within, both today and into the future. We need to ensure that AI adheres to well-considered ethical guidelines such as privacy, individual rights and the increasingly complex area of non-manipulation,” according to Ms Cobey. “AI is trained on historical data, data that incorporates the biases and outdated value systems of yesterday. We need to leverage the power of our massive data stores while preventing historical discrimination from being replicated through AI’s decision framework. With proper design and training, artificial intelligence can reverse historical inequality, but only if we consciously build equality into the objectives set for the AI system.

“My point is that – ideally – AI can be a powerful vehicle for good. We should harness AI today to open up education to students all around the world, building a customized learning plan for each child. We should embrace AI to meet multiple societal objectives such as driving energy efficiency while improving individual safety. AI can easily find the optimal schedule for street lighting to conserve energy and decrease aggressive behavior in entertainment areas. We should program AI to save lives by pre-empting imminent catastrophe and assist our leaders in conducting sophisticated scenario planning, not only to draft but also to implement more timely and effective emergency response plans. AI has proven that there is still a lot of untapped value in our current data stores. EY sponsored a global modeling competition to demonstrate how frog counting is an excellent indicator of an ecosystem’s health. Weather data is a particularly useful data set, used by large investment companies to more accurately predict storm impacts on commodity prices, and by small-scale farmers to time their crop planting and harvesting. I truly believe that AI can have a huge positive impact on society, but as great as all those benefits are, it is a technology that can also be used to cause a lot of harm.”

Ethical and control implications of AI will be the deciding factor as to whether the legacy of AI in the future is a dystopia or utopia

Ms Cobey, who has been helping institutions to better understand and manage business risks related to technology throughout her 29 years with EY, is convinced that “AI can be an enabler towards improving the human condition but, as it builds its own decision framework, ethical and equitable objectives need to be built into its design” – not mitigated for afterwards or accepted as the way it’s always been done. “It’s a no-brainer that the technology will become increasingly available and the onus is on us to grasp it with wisdom, prudence and respect. This is no easy task and will require the joint effort of robust multidisciplinary teams.”

Despite the complexity, it is the diversity of dedicated designers – in terms of gender, age, values and lineage – that will be the backbone of human-technology interaction, working together to improve the quality of people’s everyday lives. “A society that is in some important ways undesirable can – with the help of AI – be reshaped and transformed into a better version of itself.”

2. Human centricity: how do you fit technology to its user?

The million-dollar question is how to make positive human-technology interaction happen in a qualitative, cost-effective fashion. “People need to interact with technology and technology needs to understand the full spectrum of its user,” says Ms Cobey, when asked to consider aspects like cognition, perception and motivation. “Human-technology interaction focuses on technology from a user’s perspective. How should we assess the viability of new technological developments in relation to human constraints and capabilities? Ultimately, we want AI to help improve the human experience. We must, however, bear the following in mind: initiatives that may be beneficial to the majority may well be detrimental to the minority. Many AI models are built to determine the optimal outcome. But there is a design flaw when the historical data they are trained on teaches AI that the most-hired candidate in the past was a white male. Or that husbands, on average, were lent more money than their wives. There is a well-discussed example where a husband and wife – with the same input variables – were given vastly different lending limits. Looking beyond the individual impact, what impact could there be on society if AI, rather than eradicating historical biases, magnifies them?”

AI is a probability-based technology. AI will provide the best answer based on the information it has, but it may not be the right answer

Leveraging her background as a CPA and technology risk practitioner, Ms Cobey has sought and found answers to fundamental AI questions. “How can AI incorporate human values? This is potentially the most complex issue we need to address, because there are currently many, many use cases for algorithms across almost every situation. We are starting to rely on AI to make decisions affecting our work, education, health, and financial and mental well-being. How can humans trust algorithms – and how can we rebuild trust when they fail? We have all seen vivid examples of innocent AI chatbots designed for human engagement being corrupted by deliberate human manipulation within hours, requiring designers to rectify situations they never thought would arise. So the question is not if algorithms will occasionally fail, but when, and whether we are prepared for that eventuality and know how to handle it. It’s all about striking the right balance between protecting human rights and maximizing technological potential in the transformative age we live in.” Ms Cobey’s point is unequivocal: “AI must work as intended! The key is defining more holistically what we intend it to do.”

3. Governance is key to AI adoption

Although artificial intelligence is still very much in its infancy, it is clear that to make AI work as intended there needs to be a governance, legal and regulatory framework in place. This oversight framework is key to ensuring that machine-learning technologies are not only well trained and monitored, but also built with objectives that span both functional and ethical considerations. “The focus must be on helping humanity chart and navigate the adoption of AI systems across multiple dimensions,” explains Ms Cobey. “AI operates in a broader ecosystem and is directly impacted by a number of global dialogues. Consider digital identities, the sharing of intellectual property, even accountability. As trustees and custodians of AI, we must build an ecosystem that has the right building blocks and builds trust with users. This will involve robust risk-based governance and control structures, and leveraging independent validation checks on the AI conceptual design, data sources, decision framework, and training and monitoring. We cannot afford to let AI amplify myopic views or magnify societal inequality because oversight and monitoring measures lag behind. Ensuring the safeguarding of standards is what it’s all about. We not only need ethics that can stand the test of time, but also good governance in place before AI can develop exponentially.”

4. Scalability: no one size fits all for companies great and small

The current challenge in AI is that there have been many pilots but few sustainable, established AI programs. Because AI facilitates the replication or simulation of human intelligence in machines, there is no limit to where it can be used. The challenge is deciding where it can add the most value, and scaling it across an organization is a significant undertaking. “Call it a North Star project if you like because the end game, after all, is that AI magnifies and strengthens the human intellect. How do you leverage a technology that can be used everywhere and has the potential of replacing its keepers?” asks Ms Cobey. “Developing, improving, adapting and combining AI methods by standardizing processes and activities is the way to scale and control AI. This needs to happen to create or apply systems that behave intelligently in hospitals, research laboratories, government bodies and throughout the financial community.” She pauses briefly, choosing her words carefully. “It is very important to remember in our oversight role that AI will always provide an answer. The challenge is that it will provide the best answer based on the information it has, but that does not mean that it is the right answer.”

AI must work as intended, so positive human-technology interaction must strike the right balance

These game-changing AI systems emerge from machine learning and include a diverse range of technologies: conversational bots, self-driving cars, complex prediction models and comprehensive optimization algorithms. “They have so much potential,” Ms Cobey enthuses. “Their feasibility depends on access to high-quality data as well as alignment with stakeholder and societal expectations. The end result is that the challenge confronting us today is to sidestep the swamps that slow us down while turning available data into actionable knowledge that contributes to building a better working world.”

Cathy Cobey joined EY in Canada in 1993 and has held management positions in the organization’s Audit and Business Consulting service lines. Throughout her three decades with the global leader in assurance, tax, transaction and advisory services, Ms Cobey has been deeply involved with EY’s Climate Change & Sustainability and Technology Risk practices. She currently serves on technical advisory committees with the Responsible AI Institute, CIO Strategy Council, CPA Canada and the Institute of Electrical and Electronics Engineers to develop industry and regulatory standards for emerging technology. Ms Cobey and her husband live and work in Toronto, Canada.

Cathy Cobey, EY’s Global Trusted AI Leader:

“The corporates will design, but consumers will decide. The onus is on us.”

EY at World Summit Artificial Intelligence

The world’s leading and largest AI summit gathers the global AI ecosystem of Enterprise, Big Tech, Startups, Investors and Science, with the brightest brains in AI as speakers, every October in Amsterdam to tackle the most burning AI issues head-on and set the global AI agenda.


Summary

If we want AI to act ethically in tomorrow’s world, we need to shape its governance today. That means reshaping and transforming our society into a better version of itself, defining more holistically what we intend AI to do, and embracing the challenges of scaling it.
