Case Study

How a global biopharma became a leader in ethical AI

EY teams used the global responsible AI framework to help a biopharma optimize AI governance, mitigate AI risks and protect stakeholders.

The better the question

How can you make today's AI ready for tomorrow's regulations?

Gaining insight into the review process and effectively improving AI risk management posed a challenge.


Artificial intelligence (AI) has emerged as a key agent of transformation, creating opportunities for enhanced insight and efficiency in all industries. Biopharma is no exception.

Our client, a global biopharmaceutical company, had previously conducted an internal AI assessment to evaluate the maturity of its processes in this emerging field, which helped identify a number of gaps. The most pressing was the absence of an AI governance framework.

The firm understood that plugging this gap would be a vital step in reaching a mature state sooner rather than later, enabling it to simultaneously harness the opportunities of AI and successfully mitigate risks spanning technical, social and ethical domains — including decision-making bias and privacy violations.

Our client subsequently developed a comprehensive AI governance framework embracing responsible AI principles like transparency, fairness and human-centricity.

“We have a collective responsibility to manage AI risk. Our AI ethics principles are an important part of our AI risk management strategy and help maximize the benefits of AI,” explains the biopharma company’s Chief Information Officer.

However, the leadership required assurance that it was moving in the right direction and went in search of an independent partner.

EY teams worked with the client on a review of its AI governance, with an eye to supporting the business in maintaining its organizational values without impeding innovation. “Because we’re largely value-driven, we wanted to make sure both people and machines live those values,” says the biopharma company’s AI Governance Lead. 
When the biopharma started its journey a few years ago, AI governance was largely uncharted territory in the sector, offering few examples to follow.

What’s more, at the time, the European Union (EU) was poised to unveil its much-anticipated draft AI regulations. Once enacted, the resulting EU Artificial Intelligence (AI) Act — the world’s first comprehensive AI regulation — will place responsibility for governance of the technology at board level.

Various other leading nations and international organizations have also been developing their own regulatory approaches around AI, with which the biopharma may have to comply.  

Ultimately, the client needed to know how to make its AI of today ready for the regulation of tomorrow. 


The better the answer

A detailed independent review of AI governance

EY teams collaborated with the biopharma to review its approach to AI ethics using the global responsible AI framework.


Part of a broader suite of EY tools, techniques and enablers designed to assist in the responsible development and use of AI, the global responsible AI framework is a flexible set of guiding principles and practical actions.

Multi-disciplinary EY teams consisting of digital ethicists, IT risk practitioners, data scientists and subject-matter professionals harnessed the global responsible AI framework to evaluate the biopharma’s responsible AI principles, as well as how these had been rolled out and understood across the business.

We overlaid the global responsible AI framework on the template that the client had already created, interviewing key stakeholders and reviewing relevant documentation.

“We invested time in understanding the client’s environment, and our experience in AI governance meant we were also able to ask the right questions at the right time,” says the EY UKI Client Technology & Innovation Officer, Catriona Campbell. 

We assessed how successfully the business had mitigated the risks of AI throughout its lifecycle, from problem identification through to modelling, deployment and ongoing monitoring.

To determine if the client had developed and implemented AI in line with its responsible AI principles, we also evaluated a sample of key AI projects, including forecasting, adverse event tracking and early disease detection.

Our review found that the biopharma was not always managing project-specific AI risks in line with its responsible AI principles. “The EY audit highlighted a number of gaps in our approach, allowing us to set minimum requirements for business teams working with AI, which we’re already working toward,” says the biopharma company’s AI Governance Lead.

The better the world works

Driving responsible AI to reduce risk to stakeholders

EY teams helped the biopharma safeguard stakeholders, including the public, from AI ethical issues.


The client was an early adopter of AI risk management in its industry, but we provided confidence in its approach and highlighted opportunities for improvement.

“Partnering with EY provided external validation in our approach and gave us valuable insight into areas where we need additional focus,” explains the biopharma company’s Chief Information Officer.

Our detailed review helped the biopharma appreciate the need for major changes in its approach to AI governance. These included improved third-party AI risk assessment and a new central AI inventory — the latter of which is foundational to AI risk management and enablement of regulatory compliance.

The firm realized that no one-size-fits-all method of AI governance exists, which makes the challenge all the greater for some businesses. For example, more federated companies with distributed autonomy must find a way to achieve consistency across multiple units without a single authority to police AI governance, while those with more centralized control face different trade-offs in adapting governance to local needs.

It also became clear that, if an independent review finds an organization’s AI governance to be unfit for purpose, the leadership should be willing to make necessary structural changes or put in place a governance board to create better alignment.

Because organizational structures, leadership and accountabilities vary from firm to firm, such a review must be equally customizable. “EY teams could work with us on how a responsible AI assessment should look, meshing what we were doing with their global responsible AI framework and working their magic to join the dots,” says the biopharma company’s Head of AI Research & Development.

The ethics of AI is still in its infancy, so it is unsurprising that many companies lack the in-house capabilities required to start or continue their journey.

Support from an independent partner with the capacity to tailor the assessment process adds value by helping an organization develop AI governance processes appropriate for its business — increasing the likelihood of regulatory compliance.

This will help position the leadership to protect stakeholders, including the public, from the risks of AI — keeping humans at the center of transformation. 
