4 minute read 14 Feb 2024

How do you plan to create trust by design in artificial intelligence?

By Christian Schnewlin

Senior Manager, Business Consulting in Financial Services | EY Switzerland

Race car driving is Christian's passion. He pushes toward the top rank with his team, knowing that consistency and sustainability are more important than short-term wins.

Related topics AI Digital Assurance Finance

Organizations should consider fundamental attributes and ongoing assurance to meet their shared responsibility for trusted AI.

In brief 

  • Companies have a shared responsibility to ensure that AI meets technical, ethical and social criteria from development to operation.
  • Customers need to be able to trust the AI used by businesses, while regulators are making trust part of the law.
  • The EY Trusted AI Framework proposes seven attributes to address the unique risks of AI and build trust.

It’s easy to become weary of headlines telling us that the latest tech breakthrough will “change everything.” But generative AI and AI-driven large language models (LLMs) are set to live up to the hype, creating a new form of intelligence whose impact may even surpass that of the PC. How can we ensure that this new form of intelligence can be trusted? 

Top barriers

33%

of organizations report unclear AI governance and ethical frameworks as a top barrier

As AI accelerates, its ability to transform performance and productivity could translate into huge value in varied sectors, from banking and healthcare to consumer goods. However, for many businesses, the buzz around AI is yet to yield genuine breakthroughs. While organizations have adopted AI in piecemeal form or launched pilot projects, these important first steps are in reality a response to uncertainty. Now is the time to move from siloed projects to a cohesive and comprehensive strategic roadmap for transformation.

The challenge will be to define the organization’s AI strategy and governance, and to implement frameworks that absorb and integrate this transformative change in as controlled and secure a way as possible. A cornerstone of this journey will be to maintain the organization’s level of digital trust; getting this wrong could result in a loss of customers, market share and brand value. Conversely, those that get it right will be able to differentiate themselves from their competitors in the digital economy as they look to disrupt their business and enter new markets. But how can an organization transform itself to such an extent while maintaining its level of digital trust?

Trusted AI framework

With the risks and impact of AI spanning technical, ethical and social domains, a new framework for identifying, measuring and responding to the risks of AI is needed to build and maintain digital trust. The EY Trusted AI Framework with seven attributes is built on the solid foundation of existing governance and control structures, but also introduces new mechanisms to address the unique risks of AI.

  • Accountability

There is unambiguous ownership of AI systems and their impacts across the AI development lifecycle.

  • Fairness

AI systems are designed with consideration for the needs of all impacted stakeholders and to promote inclusiveness and positive societal impact.

  • Reliability

Outcomes of AI systems are aligned with stakeholder expectations and perform at a desired level of precision and consistency, whilst being secured from unauthorized access, corruption, and/or adversarial attack.

  • Explainability

    Appropriate levels of explanation are enabled so that the decision criteria of AI systems can be reasonably understood, challenged, and/or validated by human operators. 

  • Transparency

    Appropriate levels of openness regarding the purpose, design, and impact of AI systems is provided so that end users and system designers can understand, evaluate, and correctly employ AI outputs.

  • Sustainability

    The design and deployment of AI systems are compatible with the goals of sustaining physical safety, social well-being, and planetary health.
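As an illustration only (this is not part of the EY Trusted AI Framework itself), the attributes above could be tracked as a simple review checklist during an AI system assessment. The attribute names come from the list above; the class, scoring approach and system name are hypothetical:

```python
# Hypothetical sketch: tracking a trusted-AI review against the
# attributes listed above. The structure and names are illustrative,
# not an official framework artifact.
from dataclasses import dataclass, field

ATTRIBUTES = [
    "Accountability",
    "Fairness",
    "Reliability",
    "Explainability",
    "Transparency",
    "Sustainability",
]

@dataclass
class TrustReview:
    system_name: str
    # Each attribute maps to a finding; None means not yet assessed.
    findings: dict = field(default_factory=lambda: {a: None for a in ATTRIBUTES})

    def record(self, attribute: str, finding: str) -> None:
        """Record the outcome of assessing one attribute."""
        if attribute not in self.findings:
            raise ValueError(f"Unknown attribute: {attribute}")
        self.findings[attribute] = finding

    def open_items(self) -> list:
        """Attributes not yet assessed, useful for tracking review progress."""
        return [a for a, f in self.findings.items() if f is None]

review = TrustReview("credit-scoring-model")
review.record("Accountability", "Model owner named in the system register")
print(review.open_items())  # the five attributes still to assess
```

The point of the sketch is simply that each attribute becomes an explicit, auditable checkpoint rather than an implicit aspiration.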

As AI solutions will often be sourced from external third parties, the shared responsibilities and their implications for the Responsible AI framework must be considered by design, right from the beginning. Using external AI providers will require the organization to adequately identify the related risks and to change how AI is governed and digital trust maintained.

Assurance essential

With great opportunity come great change, great risk and great responsibility. Maintaining digital trust throughout development, implementation and operation is essential for the success and speed of your adoption. We believe there are three fundamental assurance actions that leaders need to incorporate now:

  • Adopt digital trust by design

Gain assurance that your organization is ready to absorb the technological change comprehensively: in its strategy, its governance and, especially, its frameworks. The impact of not assuring emerging technologies in advance will grow in line with the power and responsibility entrusted to them as they are embedded into safety-critical or decision-making systems.

  • Maintain the digital trust level through assurance

Due to the many moving parts that must seamlessly align and support each other, there is a high risk of misalignment, inefficiency and, ultimately, ineffectiveness. Gain assurance right from the start of your AI journey to ensure that you are doing the right thing and doing it right. Assurance in AI transformation will increase the likelihood of a successful and timely transformation.

  • Crack through AI

The complexity of LLMs, together with the current state of research, market understanding and adoption, is not yet sufficient to fully trust AI. AI solutions are often black boxes, offering little transparency and limited control. Inject assurance elements into your LLM design by embedding data points that will later allow you to gain assurance over the AI's activities. LLM model reviews are fundamental to getting it right from the start.
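One way to embed such data points, shown here purely as an illustrative sketch, is to log every model interaction with enough metadata that its behavior can be audited later. The function name, logged fields and hash-based tamper check below are all assumptions, not a prescribed design:

```python
# Hypothetical sketch of an assurance record for an LLM-based system.
# The logged fields are illustrative examples of assurance data points.
import hashlib
import json
import time

def audit_record(prompt: str, response: str, model_id: str) -> dict:
    """Build a tamper-evident record of one model interaction."""
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        # Hashes let auditors verify what was said without storing raw text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "response_length": len(response),
    }
    # Hash over the record itself, so later tampering is detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

rec = audit_record("What is our refund policy?", "Refunds within 30 days.", "llm-v1")
```

Records like these, written at design time, are the kind of embedded data points that later make independent assurance over a black-box model feasible.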

If AI delivers on its potential, it could be every bit as transformative as the personal computer has been over the last five decades, supercharging productivity, unleashing innovation and spawning new business models — while disrupting those that don’t adapt quickly enough. The uncertainty and resource constraints confronting many companies are real, but there’s no need to let them become an excuse for inaction and delay. 

Summary

Navigating trust in AI involves addressing the risks associated with its rapid growth. Regulatory developments like the EU AI Act play an important role in building trust, but organizations must also take a proactive approach and acknowledge AI as a shared responsibility.

Acknowledgements

We kindly thank Emre Beyazgül and Gian Luca Kaiser for their valuable contribution to this article.
