How can risk foresight lead to AI insight?

By Cathy Cobey

EY Global Trusted AI Consulting Leader

Thought leader in digital trust. Advocate of women in technology. Lover of good food. Film festival enthusiast. Avid learner. Parent.

6 minute read 20 Sep 2019


When it can take only one mistake — or a perception of a mistake — for a user to stop trusting AI, how can you earn and sustain user trust?

Every challenge in business is an opportunity for AI. However, many organizations are holding back from these opportunities because they mistrust AI, and so remain cautiously selective about where they use it.

Trust is the foundation on which organizations can build stakeholder confidence and active participation with their AI systems. In this era of instantly accessible information, however, mistakes can be costly and second chances are harder to come by. Organizations that want to succeed in an AI world must embed a risk-optimization mindset across the AI lifecycle. They do this by elevating risk from a merely responsive function to a powerful, dynamic and future-facing enabler of trust.

Building trust in AI

AI is introducing new risks and impacts that have historically been the purview of human decision-making, not technology development. 

With the risks and impacts of AI spanning across technical, ethical and social domains, a new framework for identifying, measuring and responding to the risks of AI is needed; one that is built on the solid foundation of existing governance and control structures, but also introduces new mechanisms to address the unique risks of AI.

Risks of AI

Managing the risks

Managing the risks of AI is about more than preventing reputational, legal and regulatory impacts. It's also about being considered trustworthy. With public discourse on AI heavily skewed to its risks, it will take time and active dialogue with stakeholders to build trust in AI systems. 

Building trust in AI will take a coordinated approach. The EY team believes there are five pillars of trust:

  1. Advocacy – Do stakeholders understand the benefits of AI and how it will enhance the products and services they receive? 
  2. Proficiency – Does AI enhance and improve an organization's brand, product, service and stakeholder experience?
  3. Consistency – Is the use of AI aligned with the organization's stated purpose, and does it support that purpose over time?
  4. Openness – Has the organization effectively communicated and engaged with its core stakeholder groups on its use of AI and the potential benefits and risks?
  5. Integrity – Is the organization’s approach to the design and operation of trusted AI in line with the expectations of its stakeholders?

In establishing the five pillars of trust, the overarching element that connects them all is accountability.

Accountability is the foundation on which trust is built and is the inflection point at which an organization translates intentions into behaviors. Regardless of an AI system's level of autonomy, ultimate responsibility and accountability for an algorithm needs to reside with a person or organization. By embedding risk management into the design enablers and monitoring mechanisms for AI, organizations can demonstrate their commitment to accountability, and their willingness to be held to account, for an AI system's predictions, decisions and behaviors.

Leading AI organizations are building Trust by Design into AI systems from the outset to help organizations move from 'what could go wrong?' to 'what has to go right?'
Amy M. Brachio
EY Global Deputy Vice Chair, Sustainability

With understanding still evolving on how AI operates and when and how risks could develop, many AI systems are considered high risk by default and approached with caution. To counteract this response, various tools and platforms are being developed to help organizations quantify the impact and trustworthiness of their AI systems.

Quantifying the risks of AI

If AI is to reach its full potential, organizations need the ability to predict and measure conditions that amplify risks and undermine trust.

Understanding the drivers of risk in AI requires consideration of a wide spectrum of contributing factors, including technical design, stakeholder impact and control maturity. Each of these, in its design and operation, can affect the risk level of an AI system. Developing an understanding of the risk drivers for an AI system is a complex undertaking. It requires careful consideration of potential stakeholder impacts across the system's full lifecycle.

In developing a trusted AI platform, there are three important components to managing the risks of an AI system:

  • Technical risk — evaluates the underlying technologies, the technical operating environment and the level of autonomy.
  • Stakeholder impact — considers the goals and objectives of the AI agent and the financial, emotional and physical impact on external and internal users, as well as reputational, regulatory and legal risk.
  • Control effectiveness — considers the existence and operating effectiveness of controls that mitigate the risks of AI.

Together, these provide an integrated approach to evaluate, quantify and monitor the impact and trustworthiness of AI. A trusted AI platform uses interactive, web-based schematic and assessment tools to build the risk profile of an AI system, and then an advanced analytical model to convert the user responses to a composite score comprising technical risk, stakeholder impact and control effectiveness of an AI system. 
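The composite-score idea described above can be sketched in a few lines. The weights, the 0-to-1 scales and the way controls offset risk below are illustrative assumptions for the sketch, not the actual model behind any trusted AI platform:

```python
# Illustrative sketch of a composite AI risk score built from the three
# components described above: technical risk, stakeholder impact and
# control effectiveness. All weights and scales are hypothetical.

def composite_risk_score(technical_risk, stakeholder_impact,
                         control_effectiveness,
                         weights=(0.4, 0.4, 0.2)):
    """Combine three 0-1 component scores into a single 0-1 risk score.

    Higher technical risk and stakeholder impact raise the score;
    more effective controls lower it.
    """
    for v in (technical_risk, stakeholder_impact, control_effectiveness):
        if not 0.0 <= v <= 1.0:
            raise ValueError("component scores must be in [0, 1]")
    w_tech, w_impact, w_control = weights
    raw = w_tech * technical_risk + w_impact * stakeholder_impact
    # Controls mitigate exposure: a fully effective control environment
    # (1.0) removes the control-weighted share of the score entirely.
    return raw + w_control * (1.0 - control_effectiveness)

# A system with moderate technical risk and impact but strong controls:
score = composite_risk_score(0.7, 0.5, 0.8)
```

A real platform would derive each component from structured assessment responses rather than a single number, but the principle is the same: the three components are scored separately and then combined into one comparable figure.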

This kind of platform can be leveraged by organizations to quantify risk through a robust desktop design-and-challenge exercise at the beginning of an AI project. Embedding trust requirements in the design of AI systems from the outset will result in more efficient AI training and higher user trust and adoption.

Responding to the risks of AI

Responding to the risks of AI will require the use of new, innovative control practices that can keep pace with AI’s fast-paced adaptive learning techniques. 

In developing a risk mitigation strategy, it's important for an organization to use an integrated approach that considers the objectives of the AI system, the potential impacts on stakeholders (both positive and negative), the technical feasibility and maturity of control mechanisms, and the risk tolerance of the AI operator.

With AI, which can continue to learn and adapt its decision framework after it’s put into production, it’s important that strong monitoring mechanisms are in place to establish trust. Organizations need to be able to continually evaluate whether an AI system is operating within acceptable performance levels and identify when a new risk is forming.
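A minimal version of such a monitoring mechanism compares a model's recent performance against its accepted baseline and raises a flag when it degrades. The window size, baseline and tolerance below are illustrative assumptions, not prescribed values:

```python
# Minimal sketch of a production monitoring check for an AI system:
# track a rolling window of prediction outcomes and flag when accuracy
# drifts below an accepted baseline. Parameters are hypothetical.

from collections import deque

class PerformanceMonitor:
    """Track recent prediction outcomes and flag performance drift."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def degraded(self):
        """True when rolling accuracy falls below baseline - tolerance."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.90, window=10)
for correct in [True] * 7 + [False] * 3:   # recent accuracy: 70%
    monitor.record(correct)
alert = monitor.degraded()                  # 0.70 < 0.85, so flagged
```

In practice, organizations would monitor more than accuracy, including input distribution shift, fairness metrics and override rates, but the pattern is the same: a continuously updated measure compared against an acceptable operating range.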


Building and maintaining trust in AI requires investment in innovative risk management techniques, going beyond an understanding of potential risks to develop a deeper risk measurement system.

Leveraging trust in AI as a competitive advantage

AI has already begun to disrupt the way that we work and live. Organizations that will thrive in an AI world will be those that can optimize both data and trust feedback loops to attract more users and accelerate their adoption of AI.

By acting in good faith, developing a robust AI risk management system and involving users in their AI journey, organizations will go a long way in establishing user trust as a competitive differentiator and translating risk foresight into AI insight.


Organizations must put trust at the heart of their AI systems and leverage risk foresight to accelerate their access to AI insights. Advanced AI tools can assist an organization in their journey by providing insights on the sources and drivers of risk and guiding an AI design team in developing targeted risk mitigation strategies.
