It’s risk, but not as we know it
In this context, boards are not the only institutions struggling to keep up with the pace of AI. Regulators are being left behind too, as new technologies outpace their ability to oversee them. And while organizations must comply with the rules regulators set – around data privacy and security, for example – regulators offer boards little support in guiding the risk management agenda.
Best practice boards have a bias to action, and they know they need to move faster and be bolder on AI. Yet not only are most board directors constrained by a knowledge gap when it comes to exploiting the benefits of AI, they are also effectively on their own in governing the associated risks, which are both novel and substantial across every industry.
Evidence suggests that public trust in autonomous, intelligent and robotic systems is fragile and easily damaged. Despite their transformative power, these systems are not fail-safe: they can malfunction, be corrupted or encode human bias in their algorithms, with potentially fatal consequences. It is therefore up to the board to make sure that trust is earned rather than destroyed.
“It may be new territory, but being bold and embracing innovations such as AI is critical,” said Sharon Sutherland, EY Global Center for Board Matters Leader and EY Global Markets Strategy and Operations Leader. “The role of the board is to help provide perspective, prevent negligence and ensure organizational longevity. Advocacy for AI is one of the key components to realize this.”
A new framework for sustaining trust in AI
To achieve this in a way that is authentic and long lasting, boards need to move beyond merely managing risk in the age of AI to sustaining trust. This mindset shift creates a new set of organizational principles. Trust should be approached as a framework, applied not only to organizational systems but to every process impacted by AI. Viewed this way, trust has many dimensions: ethics, responsibility, accountability, transparency and, ultimately, the explainability of the underlying systems. Without such a holistic approach, it is very difficult to sustain trust over time.
“The potential of AI to transform our world is tremendous, but the risks are significant, complex and fast-evolving,” said Nicola Morini Bianzino, EY Global Chief Client Technology Officer. “Those who embed the principles of trust in AI from the start are better positioned to reap AI’s greatest rewards.”
With trust embedded, organizations can then start to fully realize the opportunities that AI will bring, and these wider conversations are starting to happen. Boards are beginning to talk about the power of AI to make sense of the huge – and growing – volumes of data their businesses now generate. They are looking into how AI can help achieve efficiencies in manufacturing and supply chains, automate routine back-office tasks, improve decision making and deliver a more personalized retail experience for customers.
“Customers are starting to expect AI whether they understand it or not, so if you’re not meeting that new expectation with customers, your experience and the service you’re providing will feel increasingly dated and uncompetitive,” said Keith Strier, EY Global and EY Americas Consulting Leader for AI.