
Chapter 1
AI has huge potential to help create sustained value for all
Get AI right and it can build stakeholder trust. Get it wrong and the risks are existential.
So, what exactly is the link between AI and long-term value?
A responsible organization thinks and acts in the long-term interests of all its stakeholders — from employees and suppliers to regulators and local communities.
That means board members like you must engage with those stakeholders and consider their interests when making decisions. In doing so, you support your organization in creating sustained value for its stakeholders and earning their ongoing trust. That, in turn, makes it more attractive to consumers, investors and prospective employees.
AI has tremendous potential to help create this sustained long-term value — as long as you get it right. It can help you build trust by providing fair outcomes for everyone coming into contact with it, or for whom it makes decisions. It can also contribute to other important elements of a responsible business, such as inclusivity, sustainability and transparency.
But rapid digitalization — particularly the rise of customer analytics — has raised questions around discrimination and fairness that risk destroying that trust. In a world where inequalities are more evident than ever, an AI technology that proves biased against particular groups, or denies them certain outcomes, is very bad news. As such, the risks of getting AI wrong aren’t only serious — they can impact the future of an organization.
As the lifeblood of AI, data is key to building and keeping trust
Ethical AI isn’t just about preventing bias, though. It’s also about privacy. “The lifeblood of AI as it's currently developed is data,” says Reid Blackman Ph.D., Founder and CEO of Virtue Consultants. “Companies have to balance the need to collect and use that data with ensuring they have consumer trust – so consumers continue to feel comfortable sharing it.”
To do this, boards need to make sure their organizations are clear and transparent about their approach to data, so consumers can give informed consent.
This will create a virtuous circle: the more consumers trust you, the more data they’ll share with you. And the more data they share, the better your AI — and ultimately, your business outcomes.
On the flip side, organizations that don’t treat their data with care can create a vicious cycle. It can take only one mistake, or perceived mistake, for a user to stop trusting your organization. If this happens, they may share less data, making your AI less effective — or even desert you for good.
Boards have a vital role to play in protecting their organizations from this vicious cycle. As Reid says, “Boards of directors have a responsibility to ensure that the reputation of their brand is protected.”
Boards need to be more constructive and contributory as AI strategies evolve
From day one of developing your AI strategy, it’s crucial you help management understand the risks and opportunities these technologies bring — and how ethics influence them both. That way, you can help them build the trust they need to embrace AI fully in the organization.
But to do that, you need to understand and trust AI yourself. And, as of today, the trust gap could be partly what’s holding organizations back.
According to John Thompson, Chairman of the Board at Microsoft, a lack of knowledge on the board is one reason for this. “There aren’t enough people that know the technology and understand its applicability, and therefore how it can be used in a meaningful way for the organization overall,” he says. “So making sure that the board is knowledgeable about the platform, and has a point of view, is a critical issue.”
It’s worth bridging this trust gap in your organization to help prevent negative news stories from damaging your brand. It could also allow you to support your organization in deploying AI technologies that are human-centric, trustworthy and serve society as a whole.

Chapter 2
Regulators need to protect organizations as well as consumers
Global standards would give consumers and organizations the same protection around the world.
The right regulation will help boards by setting parameters that make sure organizations deploy AI in a trusted way. That means a way that’s safe and fair for the consumer, as well as good for business. As Eva Kaili, a member of the European Parliament and Chair of its Science and Technology Options Assessment body (STOA), puts it: “We want regulation to benefit citizens, not just maximize profits.”
But two factors make creating appropriate regulation a challenge. First, regulators need to walk a fine line between protecting consumers and giving organizations enough room to innovate: too strict, and innovation is stifled; too lax, and consumers are vulnerable to bias or privacy breaches.
Second, AI technologies cross borders. Recognizing this, in October 2020, the European Parliament became one of the first institutions to publish detailed proposals on how to regulate AI across EU member states. It’s now rewriting its draft legislation in light of the COVID-19 pandemic.
Collaboration and dialogue will be key to creating global standards
We applaud this effort. But as Eva says: “We cannot ignore that we have to apply global standards to get the maximum benefit of AI technology.”
For Eva, these standards would need to address business-to-consumer as well as business-to-business relationships, so a one-size-fits-all approach wouldn’t work. Instead, she suggests, “Different business models will have to show how they respect privacy, how they respect fundamental rights and principles and how they manage to do that by default and embed it in their algorithms. They will need to follow principles that ensure that businesses or consumers that interact at an international level will have the same protection that they have in their own country.”
The principles would also flex to reflect varying levels of risk, as Eva explains. “The concept at this point is to ensure that we will have different approaches per sector — low risk and high risk.”
Collaboration between the private sector, governments and academia is central to making sure legislation reflects how companies are using AI, and that it balances risk with commercial reality. “We have to keep an open dialogue,” says Eva. “It’s very important, since our legislative proposals are relevant to what the market needs, to make sure we will be open to listen.”

Chapter 3
10 questions you can ask to help build and safeguard trust in AI
To mitigate AI risks, you’ll need to build your knowledge and strengthen governance and oversight around its use and application.
It’s clear that governing the use of AI can be complex and challenging, particularly for boards outside of the tech sector. But as John Thompson says: “I think it's unequivocal that AI will, in fact, be an important technology platform for every company around the world.”
That means you’ll need to make sure the AI technologies your organization designs and deploys are unbiased, and that the organization is safeguarded against the associated risks. To do that, you’ll need to educate yourself to understand these technologies better, and strengthen the governance around their use.
These questions should help you kick-start or re-evaluate the current process in your organization, so that trusted AI is embedded and creates long-term value for all.
- How can we ensure our early involvement and continued commitment to our organization’s AI strategy?
- Do we understand our specific role in establishing objectives and principles for AI that help protect our organization against unintended consequences? Do we have adequate processes and procedures in place to react quickly in case of AI failures?
- Do we have the right set of skills to guide management in making the right decisions about AI, trust, ethics and risks? Can we bridge any gaps with internal training or do we need to consider external resources? What’s our plan for continuous upskilling and training?
- Have we made transparency and accountability a top priority when it comes to AI? If yes, where can our principles be found? If not, what’s our plan to address this?
- Are we confident the current governance structure is sufficient to effectively oversee our organization's use of AI, whether developed in house or acquired? Should we consider a special committee to provide enhanced governance and focus?
- Are we consulting a diverse group of stakeholders, including end users, to challenge and test the objectives we’ve set for our AI applications? Are we regularly communicating with the operating team to make sure these objectives align with how they’re developing and deploying AI?
- How do we ensure that our AI operations teams factor in compliance and risk management from the earliest stages of development?
- To what extent do we work with governments to understand AI regulatory developments in our sector, and make sure we’re aligned? Should we consider heightened engagement if our current levels are low or non-existent?
- How do we identify and learn from early adopters and regulation pioneers to aid our decision-making on how best to use and govern these technologies within our organizations?
- Are we doing enough with AI technologies to move from remaining competitive to also ensuring that we create long-term value? If not, which of the issues identified here are standing in our way?
The views of third parties set out in this publication are not necessarily the views of the global EY organization or its member firms. Moreover, they should be seen in the context of the time they were made.
Summary
As a board member, you can help your organization build and safeguard trust in AI among your stakeholders — but only if you apply AI technologies right. Use AI unethically and your organization could face existential risks.
The right regulation will support you by mandating the use of trusted AI, but global standards are challenging to develop. You can help protect your organization and its consumers by understanding the application of these technologies enough to oversee their ethical use.