This link suggests a symbiotic relationship. Companies that have moved further along the responsible AI journey are the ones seeing improvements in the areas that need the biggest boost, and it’s not hard to see why. Anxious employees may be reassured by a public commitment to responsible AI from their employer. Communicating a responsible approach can build brand reputation and customer loyalty, ultimately driving revenue growth. And robust governance can help prevent costly technical and ethical breaches, as well as reducing recruitment and retention costs — savings that flow straight through to the bottom line.
For business leaders, the message is clear — increase the return on your AI investments by moving further along the responsible AI journey.
The price tag of ignoring the risks
While responsible AI adoption drives benefits, the converse is also true: neglecting it can come at a steep cost. Almost every company in our survey (99%) reported financial losses from AI-related risks, and 64% experienced losses exceeding US$1 million. On average, the financial loss to companies that have experienced risks is conservatively estimated at US$4.4 million.1 That’s an estimated total loss of US$4.3 billion across the 975 respondents in our sample.
The most common risks organizations reported being negatively impacted by are non-compliance with AI regulations (57%), negative impacts to sustainability goals (55%) and bias in outputs (53%). Issues such as explainability, legal liability and reputational damage have so far been less prominent, but their significance is expected to grow as AI is deployed more visibly and at scale.
Encouragingly, responsible AI is already linked to fewer negative impacts. For example, those who have defined a clear set of responsible AI principles have experienced 30% fewer risks than those who haven’t.
C-suite blind spots leave companies exposed
Despite the financial stakes, it’s clear that many C-suite leaders don’t know how to apply the right controls to mitigate AI risks. When asked to match the appropriate controls against five AI-related risks, only 12% of respondents got them all right.
As might be expected, CIOs and CTOs performed the best — yet even here, only about a quarter answered correctly across all five risks.
Chief AI Officers (CAIOs) and Chief Digital Officers (CDOs) fared only slightly better than average (15%), likely reflecting backgrounds grounded more in data science, academia and model development than in traditional technology risk management. Consequently, they may have less experience managing technology-related risks than their CIO and CTO counterparts.
Concerningly, CROs — the leaders ultimately responsible for AI risks — performed slightly below average, at 11%. And at the bottom of the spectrum, CMOs, COOs and CEOs performed worst (3%, 6% and 6% respectively).