The first step in minimizing the risks of AI is to promote awareness of them at the executive level as well as among the designers, architects and developers of the AI systems that the organization aims to deploy.
Then, the organization must commit to proactively designing trust into every facet of the AI system from day one. This trust should extend to the strategic purpose of the system, the integrity of data collection and management, the governance of model training and the rigor of techniques used to monitor system and algorithmic performance.
Adopting a set of core principles to guide AI-related design, decisions, investments and future innovations will help organizations cultivate the necessary confidence and discipline as these technologies evolve.
Remember, AI is constantly changing, both in how organizations use it and in how it evolves and learns once it is operating. That continuous innovation is exciting and will undoubtedly yield tremendous new capabilities and impacts, but conventional governance principles are simply insufficient to cope with AI’s high stakes and rapid pace of evolution. These twin challenges require a more rigorous approach to governing how organizations harness AI for the best outcomes, now and in the future.
In our ongoing dialogues with clients, regulators and academia, as well as in our experience developing early use cases and risk assessments for AI initiatives, we have observed three core principles that can help guide AI innovation in a way that builds and sustains trust:
- Purposeful design: Design and build systems that purposefully integrate the right balance of robotic, intelligent and autonomous capabilities to advance well-defined business goals, mindful of context, constraints, readiness and risks.
- Agile governance: Track emergent issues across social, regulatory, reputational and ethical domains to inform processes that govern the integrity of a system, its uses, architecture and embedded components, data sourcing and management, model training and monitoring.
- Vigilant supervision: Continuously fine-tune, curate and monitor systems to achieve reliable performance, identify and remediate bias, and promote transparency and inclusiveness (a brief monitoring sketch follows this list).
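To make vigilant supervision concrete, the Python sketch below shows the kind of recurring check it implies: comparing live performance against a baseline and looking for accuracy gaps across subgroups. It is a minimal illustration under stated assumptions, not a production monitoring system; the thresholds, subgroup labels and numbers are all hypothetical placeholders for whatever your own governance policy and telemetry define.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical thresholds -- in practice these come from your
# governance policy, not from this sketch.
ACCURACY_DROP_LIMIT = 0.05   # max tolerated drop vs. baseline accuracy
SUBGROUP_GAP_LIMIT = 0.10    # max tolerated accuracy gap between subgroups

@dataclass
class MonitoringWindow:
    """Outcomes observed for one subgroup in recent production traffic."""
    subgroup: str
    correct: int
    total: int

    @property
    def accuracy(self) -> float:
        return self.correct / self.total if self.total else 0.0

def review_window(baseline_accuracy: float,
                  windows: List[MonitoringWindow]) -> List[str]:
    """Return human-readable alerts for performance drift or subgroup disparity."""
    alerts = []
    overall_correct = sum(w.correct for w in windows)
    overall_total = sum(w.total for w in windows)
    overall_accuracy = overall_correct / overall_total

    # Performance drift: has the live system degraded vs. its baseline?
    if baseline_accuracy - overall_accuracy > ACCURACY_DROP_LIMIT:
        alerts.append(f"Accuracy drifted from {baseline_accuracy:.2f} "
                      f"to {overall_accuracy:.2f}; retraining review needed.")

    # Bias check: is any subgroup served notably worse than the others?
    accuracies = {w.subgroup: w.accuracy for w in windows}
    gap = max(accuracies.values()) - min(accuracies.values())
    if gap > SUBGROUP_GAP_LIMIT:
        alerts.append(f"Subgroup accuracy gap of {gap:.2f} across "
                      f"{sorted(accuracies)}; investigate for bias.")
    return alerts

# Example: a weekly review with illustrative, made-up numbers
windows = [MonitoringWindow("group_a", correct=880, total=1000),
           MonitoringWindow("group_b", correct=760, total=1000)]
for alert in review_window(baseline_accuracy=0.90, windows=windows):
    print(alert)
```

In practice, the value of such a check comes less from the arithmetic than from running it on a fixed cadence, with a named owner who must act on every alert.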
What makes these principles specific to AI? It’s the qualifiers in each one: purposeful, agile and vigilant. These characteristics address the unique facets of AI that can pose the greatest challenges.
For example, the use of AI in historically “human-only” areas is challenging the conventional design process. After all, the whole point of AI is to incorporate and, in effect, emulate a human decision framework, including considerations for laws, ethics, social norms and corporate values that humans apply (and trade off) all the time. These unique expectations demand that organizations adopt a more purposeful approach to design that will enable the advantages of AI’s autonomy while mitigating its risks.
Similarly, as the technologies and applications of AI are evolving at breakneck speed, governance must be sufficiently agile to keep pace with its expanding capabilities and potential impacts. Finally, while all innovations benefit from monitoring and supervision, the high stakes involved, combined with the dynamic “learning” nature of AI (which continues to change after it has been put in place), demand more vigilance than organizations have typically applied.
With these guiding principles at the core, the organization can then move purposefully to assess each AI project against a series of conditions or criteria. Evaluating projects against these conditions, which extend beyond those used for legacy technology, brings much-needed discipline to weighing the broader contexts and potential impacts of AI.
Assessing AI risks:
Let’s look at four conditions that you can use to assess the risk exposure of an AI initiative (a simple checklist sketch follows the list):
- Ethics — The AI system needs to comply with ethical and social norms, including corporate values. This covers the behavior of the humans who design, develop and operate AI, as well as the behavior of the AI itself as a virtual agent. This condition, more than any other, introduces considerations that have historically not been mainstream for traditional technology, including moral behavior, respect, fairness, bias and transparency.
- Social responsibility — The potential societal impact of the AI system should be carefully considered, including its impact on the financial, physical and mental well-being of humans and our natural environment. For example, potential impacts might include workforce disruption, skills retraining, discrimination and environmental effects.
- Accountability and “explainability” — The AI system should have a clear line of accountability to an individual. Also, the AI operator should be able to explain the AI system’s decision framework and how it works. This is more than simply being transparent; this is about demonstrating a clear grasp of how AI will use and interpret data, what decisions it will make with it, how it may evolve and the consistency of its decisions across subgroups. Not only does this support compliance with laws, regulations and social norms, it also flags potential gaps in essential safeguards.
- Reliability — Of course, the AI system should be reliable and perform as intended. This involves testing the functionality and decision framework of the AI system to detect unintended outcomes, system degradation or operational shifts, not just during the initial training or modeling but also throughout its ongoing “learning” and evolution.
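To illustrate how these four conditions might translate into a working checklist, here is a minimal Python sketch. The five-point scoring scale, the minimum threshold and the example scores are all hypothetical assumptions standing in for whatever your own governance process defines; the scores themselves would come from human reviewers, not from code.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# The four conditions from the list above, used as assessment dimensions.
CONDITIONS = ("ethics", "social_responsibility",
              "accountability_explainability", "reliability")

@dataclass
class AIInitiativeAssessment:
    """Illustrative pre-deployment review against the four conditions.

    Scores are hypothetical: 1 (major gaps) to 5 (well controlled),
    assigned by the review board, not computed by this sketch.
    """
    name: str
    scores: Dict[str, int] = field(default_factory=dict)

    MINIMUM_SCORE = 3  # assumed governance threshold per condition

    def deficiencies(self) -> List[str]:
        """Flag any condition scored below the minimum, or not scored at all."""
        flagged = []
        for condition in CONDITIONS:
            score = self.scores.get(condition)
            if score is None:
                flagged.append(f"{condition}: not yet assessed")
            elif score < self.MINIMUM_SCORE:
                flagged.append(f"{condition}: scored {score}, "
                               f"below minimum {self.MINIMUM_SCORE}")
        return flagged

# Example: a hypothetical claims-triage initiative with made-up scores
assessment = AIInitiativeAssessment(
    name="claims-triage-model",
    scores={"ethics": 4, "social_responsibility": 3,
            "accountability_explainability": 2, "reliability": 4})

for issue in assessment.deficiencies():
    print(f"[{assessment.name}] {issue}")
```

The point is not the arithmetic but the discipline: every condition receives an explicit score before the initiative proceeds, and anything below threshold is flagged as a deficiency to remediate.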
Taking the time to assess a proposed AI initiative against these criteria before proceeding can help flag deficiencies early, so you can mitigate risks before they materialize.