How do you teach AI the value of trust?

Authors
Cathy Cobey

EY Global Trusted AI Consulting Leader

Thought leader in digital trust. Advocate of women in technology. Lover of good food. Film festival enthusiast. Avid learner. Parent.

Jeanne Boillet

EY Global Accounts Committee Assurance Lead

Innovation driver in audit services. Client-centric. Strong advocate for diversity and the advancement of women in business.

12 minute read 3 Sep. 2018


The transformative potential of AI is high — but so are its risks. Can embedding trust from the start help your company reap AI’s rewards?

This article is part of a collection of insights about digital trust.

As the use of artificial intelligence (AI) and machine learning proliferates, AI technologies are rapidly outpacing the organizational governance and controls that guide their use.

External regulators simply can’t keep up, and enterprises are grappling with increasing demands to demonstrate sound and transparent controls that can evolve as quickly as the technology does.

As recent high-profile catastrophes have shown time and again, using AI without a robust governance and ethical framework around it carries serious operational risks. Data technologies and systems can malfunction, be deliberately or accidentally corrupted and even adopt human biases. These failures have profound ramifications for security, decision-making and credibility, and may lead to costly litigation, reputational damage, customer revolt, reduced profitability and regulatory scrutiny.

The need to build trust

Within the organization, leaders must have confidence that their AI systems are functioning reliably and accurately, and they need to be able to trust the data being used. Yet this remains an area of concern; in our recent survey, nearly half (48%) of the respondents cited a lack of confidence in the quality and trustworthiness of data as a challenge for enterprise-wide AI programs.1

Meanwhile, organizations also need to build trust with their external stakeholders. For example, customers, suppliers and partners need to have confidence in the AI operating within the organization. They want to know when they are interacting with AI, what kind of data it is using, and for what purpose. And they want assurances that the AI system will not collect, retain or disclose their confidential information without their explicit and informed consent. Those who doubt the purpose, integrity and security of these technologies will be reluctant — and may ultimately refuse — to share the data on which tomorrow’s innovation relies.

Regulators are also looking for AI to have a net positive impact on society, and they have begun to develop enforcement mechanisms for human protections, freedoms and overall well-being.

Ultimately, to be accepted by users — both internally and externally — AI systems must be understandable, meaning their decision framework can be explained and validated. They must also be resolutely secure, even in the face of ever-evolving threats.

Amid these considerations, it is increasingly clear that failure to adopt governance and ethical standards that foster trust in AI will limit organizations’ ability to harness the full potential of these exciting technologies to fuel future growth.

Without trust, AI cannot deliver on its potential value. New governance and controls geared to AI’s dynamic learning processes can help address risks and build trust in AI.
Cathy Cobey
EY Global Trusted AI Consulting Leader
Chapter 1

Embedding trust into every facet of AI

Principles designed to foster confidence

The first step in minimizing the risks of AI is to promote awareness of them at the executive level as well as among the designers, architects and developers of the AI systems that the organization aims to deploy.

Then, the organization must commit to proactively designing trust into every facet of the AI system from day one. This trust should extend to the strategic purpose of the system, the integrity of data collection and management, the governance of model training and the rigor of techniques used to monitor system and algorithmic performance.

Adopting a set of core principles to guide AI-related design, decisions, investments and future innovations will help organizations cultivate the necessary confidence and discipline as these technologies evolve.

Remember, AI is constantly changing, both in how organizations use it and in how it evolves and learns once it is operating. That continuous innovation is exciting and will undoubtedly yield tremendous new capacities and impacts, but conventional governance principles are simply insufficient to cope with AI’s high stakes and its rapid pace of evolution. These twin challenges require a more rigorous approach to governing how organizations can harness AI for the best outcomes, now and in the future.

In our ongoing dialogues with clients, regulators and academia — as well as in our experience in developing early uses and risk assessments for AI initiatives — we have observed three core principles that can help guide AI innovation in a way that builds and sustains trust:

  1. Purposeful design: Design and build systems that purposefully integrate the right balance of robotic, intelligent and autonomous capabilities to advance well-defined business goals, mindful of context, constraints, readiness and risks.
  2. Agile governance: Track emergent issues across social, regulatory, reputational and ethical domains to inform processes that govern the integrity of a system, its uses, architecture and embedded components, data sourcing and management, model training and monitoring.
  3. Vigilant supervision: Continuously fine-tune, curate and monitor systems to achieve reliability in performance, identify and remediate bias, and promote transparency and inclusiveness.

What makes these principles specific to AI? It’s the qualifiers in each one: purposeful, agile and vigilant. These characteristics address the unique facets of AI that can pose the greatest challenges. 

For example, the use of AI in historically “human-only” areas is challenging the conventional design process. After all, the whole point of AI is to incorporate and, in effect, emulate a human decision framework, including considerations for laws, ethics, social norms and corporate values that humans apply (and trade off) all the time. These unique expectations demand that organizations adopt a more purposeful approach to design that will enable the advantages of AI’s autonomy while mitigating its risks.

Similarly, as the technologies and applications of AI are evolving at breakneck speed, governance must be sufficiently agile to keep pace with its expanding capabilities and potential impacts. And lastly, while all new innovations thrive with monitoring and supervision, the sheer stakes at play, plus the ongoing, dynamic “learning” nature of AI (which means it continues to change after it has been put in place) require more vigilance than organizations have typically adopted.

With these guiding principles at the core, the organization can then move purposefully to assess each AI project against a series of conditions or criteria. Evaluating each AI project against these conditions, which extend beyond those used for legacy technology, brings much-needed discipline to the process of considering the broader contexts and potential impacts of AI.

Assessing AI risks

Let’s look at four conditions that you can use to assess the risk exposure of an AI initiative:

  1. Ethics — The AI system needs to comply with ethical and social norms, including corporate values. This includes the human behavior in designing, developing and operating AI, as well as the behavior of AI as a virtual agent. This condition, more than any other, introduces considerations that have historically not been mainstream for traditional technology, including moral behavior, respect, fairness, bias and transparency.
  2. Social responsibility — The potential societal impact of the AI system should be carefully considered, including its impact on the financial, physical and mental well-being of humans and our natural environment. For example, potential impacts might include workforce disruption, skills retraining, discrimination and environmental effects.
  3. Accountability and “explainability” — The AI system should have a clear line of accountability to an individual. Also, the AI operator should be able to explain the AI system’s decision framework and how it works. This is more than simply being transparent; this is about demonstrating a clear grasp of how AI will use and interpret data, what decisions it will make with it, how it may evolve and the consistency of its decisions across subgroups. Not only does this support compliance with laws, regulations and social norms, it also flags potential gaps in essential safeguards.
  4. Reliability — Of course, the AI system should be reliable and perform as intended. This involves testing the functionality and decision framework of the AI system to detect unintended outcomes, system degradation or operational shifts — not just during the initial training or modelling but also throughout its ongoing “learning” and evolution.

Taking the time to assess a proposed AI initiative against these criteria before proceeding can help flag potential deficiencies so you can mitigate risks before they arise.
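To make this kind of assessment concrete, the minimal Python sketch below shows one way a team might record scores against the four conditions and flag any that fall short of an agreed threshold. The condition names, the 1-to-5 scale and the threshold are assumptions chosen purely for illustration, not a prescribed methodology.

```python
from dataclasses import dataclass, field

# Illustrative only: the four conditions mirror those discussed above;
# the 1-5 scoring scale and the threshold are assumptions for this sketch.
CONDITIONS = [
    "ethics",
    "social_responsibility",
    "accountability_explainability",
    "reliability",
]

@dataclass
class AIRiskAssessment:
    initiative: str
    scores: dict = field(default_factory=dict)  # condition -> score from 1 (weak) to 5 (strong)

    def flag_deficiencies(self, threshold: int = 3) -> list:
        """Return the conditions whose score falls below the agreed threshold."""
        return [c for c in CONDITIONS if self.scores.get(c, 0) < threshold]

# Hypothetical example: scoring a proposed customer-service chatbot
assessment = AIRiskAssessment(
    initiative="customer-service chatbot",
    scores={
        "ethics": 4,
        "social_responsibility": 3,
        "accountability_explainability": 2,  # decision framework not yet documented
        "reliability": 4,
    },
)
print(assessment.flag_deficiencies())  # ['accountability_explainability']
```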

Chapter 2

Taking a holistic view of AI risks

Understand risk to unlock attributes of trusted AI

Having met these conditions for AI confidence, the organization can now put the next layer of checks and balances into action.

To truly achieve and sustain trust in AI, an organization must understand, govern, fine-tune and protect all of the components embedded within and around the AI system. These components can include data sources, sensors, firmware, software, hardware, user interfaces and networks, as well as human operators and users.

This holistic view requires a deeper understanding of the unique risks across the whole AI chain. We have developed a framework to help enterprises explore the risks that go beyond the underlying mathematics and algorithms of AI and extend to the systems in which AI is embedded.

Our unique “systems view” enables the organization to develop five key attributes of a trusted AI ecosystem:

  1. Transparency: From the outset, end users must know and understand when they are interacting with AI. They must be given appropriate notification and be provided with an opportunity to (a) select their level of interaction and (b) give (or refuse) informed consent for any data captured and used.
  2. “Explainability”: The concept of explainability is growing in influence and importance in the AI discipline. Simply put, it means the organization should be able to clearly explain the AI system; that is, the system shouldn’t outpace the ability of humans to explain its training and learning methods, as well as the decision criteria it uses. These criteria should be documented and readily available for human operators to review, challenge and validate as the AI system continues to “learn.”
  3. Bias: Inherent biases in AI may be inadvertent, but they can be highly damaging both to AI outcomes and trust in the system. Biases may be rooted in the composition of the development team, or the data and training/learning methods, or elsewhere in the design and implementation process. These biases must be identified and addressed through the entire AI design chain; a simple measurement sketch follows this list.
  4. Resiliency: The data used by the AI system components and the algorithms themselves must be secured against the evolving threats of unauthorized access, corruption and attack.
  5. Performance: The AI’s outcomes should be aligned with stakeholder expectations and perform at a desired level of precision and consistency.
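To make the bias and performance attributes more tangible, here is a minimal, illustrative Python sketch of one way a team might compare a model’s outcomes across subgroups. It assumes a binary-decision model; the predictions, group labels and tolerance are hypothetical and chosen only for demonstration.

```python
import numpy as np

def subgroup_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-outcome rate of the model's predictions for each subgroup."""
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

def max_disparity(rates: dict) -> float:
    """Largest gap in positive-outcome rates across subgroups (demographic parity difference)."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical example: binary loan-approval predictions and a protected attribute
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = subgroup_rates(predictions, groups)
print(rates)  # e.g. {'A': 0.6, 'B': 0.6}
if max_disparity(rates) > 0.1:  # tolerance chosen for illustration only
    print("Potential bias: approval rates differ materially across subgroups")
```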

Those organizations that anchor their AI strategy and systems in these guiding principles and key attributes will be better positioned for success in their AI investments. Achieving this state of trusted AI takes not only a shift in mindset toward more purposeful AI design and governance, but also specific tactics designed to build that trust.

Chapter 3

Leading tactics for managing risk and building trust

Emerging AI governance practices

With the increasing impact AI is having on business operations, boards need to understand how AI technologies will impact their organization’s business strategy, culture, operating model and sector. They need to consider how their dashboards are changing and how they can evaluate the sufficiency of management’s governance over AI, including ethical, societal and functional impacts.
Jeanne Boillet
EY Global Accounts Committee Assurance Lead

To truly apply trusted AI principles, organizations need the right governance in place.

Let’s explore some of the leading tactics that we have observed with our clients to help build a trusted AI ecosystem:

AI ethics board — A multi-disciplinary advisory board, reporting to and/or governed by the board of directors, can provide independent guidance on ethical considerations in AI development and capture perspectives that go beyond a purely technological focus. Advisors should be drawn from ethics, law, philosophy, privacy, regulations and science to provide a diversity of perspectives and insights on issues and impacts that may have been overlooked by the development team.

AI design standards — Design policies and standards for the development of AI, including a code of conduct and design principles, help define the AI governance and accountability mechanisms. They can also enable management to identify what is and is not acceptable in AI implementation. For example, these standards could help the organization define whether or not it will develop autonomous agents that could physically harm humans.

AI inventory and impact assessment — Conducting a regular inventory of all AI algorithms can reveal any orphan AI technologies being developed without appropriate oversight or governance. In turn, each algorithm in the inventory should be assessed to flag potential risks and evaluate the impact on different stakeholders.

Validation tools — Validation tools and techniques can help make certain that the algorithms are performing as intended and are producing accurate, fair and unbiased outcomes. These tools can also be used to track changes to the algorithm’s decision framework and should evolve as new data science techniques become available.  
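As one illustration of such a tool, the Python sketch below uses the population stability index (a common drift measure) to check whether an algorithm’s output distribution has shifted since it was validated. The bin count, the simulated scores and the alert threshold are assumptions for the example rather than a recommended standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two score distributions; larger values indicate a bigger shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical example: scores recorded at model validation vs. scores seen in production
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.1, 1_000)
production_scores = rng.normal(0.58, 0.12, 1_000)  # distribution has drifted

psi = population_stability_index(baseline_scores, production_scores)
if psi > 0.2:  # commonly cited rule of thumb; treat it as an assumption here
    print(f"PSI {psi:.2f}: decision framework may have shifted - investigate")
```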

Awareness training — Educating executives and AI developers on the potential legal and ethical considerations around AI and their responsibility to safeguard users’ rights, freedoms and interests is an important component of building trust in AI.

Independent audits — Regular independent AI ethical and design audits by a third party are valuable in testing and validating AI systems. Applying a range of assessment frameworks and testing methods, these audits assess the system against existing AI and technology policies and standards. They also evaluate the governance model and controls across the entire AI life cycle. Given that AI is still in its infancy, this rigorous approach to testing is critically important for safeguarding against unintended outcomes.

A foundation of trust to enable a confident future

As AI and its technologies continue to evolve at an astonishing rate — and as we find new and innovative uses for them — it is more important than ever for organizations to embed the principles and attributes of trust into their AI ecosystem from the very start.

Those who embrace leading practices in ethical design and governance will be better equipped to mitigate risks, safeguard against harmful outcomes and, most importantly, sustain the essential confidence that their stakeholders seek. Enabled by the advantages of trusted AI, these organizations will be better positioned to reap the potential rewards of this tremendously exciting, yet still largely uncharted journey.

What questions should leaders be asking?

  • How can my organization minimize risks in our AI journey while still enabling us to harness the full potential of these exciting new technologies? 
  • How can my organization use these technologies to augment human intelligence and unlock innovation? 
  • What steps can we take to build our AI strategy and systems on a foundation of trust and accountability?  

Summary

The potential of AI to transform our world is tremendous, but the risks are significant, complex and fast-evolving. Those who embed the principles of trust in AI from the start are better positioned to reap AI’s greatest rewards.
