Leveraging her background as a CPA and technology risk practitioner, Ms Cobey has sought and found answers to fundamental AI questions. “How can AI incorporate human values? This is potentially the most complex issue we need to address, because there are currently many, many use cases for algorithms across almost every situation. We are starting to rely on AI to make decisions affecting our work, education, health, and financial and mental well-being. How can humans trust algorithms, and how can we rebuild trust when they fail? We have all seen vivid examples of innocent AI chatbots designed for human engagement being corrupted by deliberate human manipulation within hours, requiring designers to rectify situations they never thought would arise. So the question is not if algorithms will fail, but when, and whether we are prepared for that failure and know how to handle it. It’s all about striking the right balance between protecting human rights and maximizing technological potential in the transformative age we live in.” Ms Cobey’s point is unequivocal: “AI must work as intended! The key is defining more holistically what we intend it to do.”
3. Governance is key to AI adoption
Although artificial intelligence is still very much in its infancy, it is clear that to make AI work as intended there needs to be a governance, legal and regulatory framework in place. This oversight framework is key to ensuring that machine-learning technologies are not only well trained and monitored, but also built with objectives that span both functional and ethical considerations. “The focus must be on helping humanity chart and navigate the adoption of AI systems across multiple dimensions,” explains Ms Cobey. “AI operates in a broader ecosystem and is directly impacted by a number of global dialogues. Consider digital identities, the sharing of intellectual property, even accountability. As trustees and custodians of AI, we must build an ecosystem that has the right building blocks and inspires trust in users. This will involve robust risk-based governance and control structures, and leveraging independent validation checks on the AI conceptual design, data sources, decision framework, and training and monitoring. We cannot afford to let AI entrench myopic views or magnify societal inequality because oversight and monitoring measures lag behind. Ensuring the safeguarding of standards is what it’s all about. We not only need ethics that can stand the test of time, but also good governance before AI can develop exponentially.”
4. Scalability: no one size fits all for companies great and small
The current challenge in AI is that there have been many pilots but few sustainable, established AI programs. Once AI can replicate or simulate human intelligence in machines, there is no limit to where it can be used. The challenge is deciding where it can add the most value, and scaling it across an organization is a significant undertaking. “Call it a North Star project if you like, because the end game, after all, is that AI magnifies and strengthens the human intellect. How do you leverage a technology that can be used everywhere and has the potential to replace its keepers?” asks Ms Cobey. “Developing, improving, adapting and combining AI methods by standardizing processes and activities is the way to scale and control AI. This needs to happen to create or apply systems that behave intelligently in hospitals, research laboratories, government bodies and throughout the financial community.” She pauses briefly, choosing her words carefully. “It is very important to remember in our oversight role that AI will always provide an answer. The challenge is that it will provide the best answer based on the information it has, but that does not mean it is the right answer.”