A trusted framework for AI and ADM in government
If AI and ADM are to fulfil their potential in government, we need to urgently consider how to build a trusted framework that allows for their adoption at scale, mitigates the risk of misuse and still encourages innovation within the public and private sectors. We believe that this framework should include three key elements:
1. Focus on human outcomes
Technology should always serve people, not the other way around. AI/ADM have the potential to power better, more cost-effective services, achieve policy objectives and assist staff to make better, more informed decisions – but only if citizens and public sector teams trust their ability to have a positive impact on people’s lives and society more broadly.
A fundamental rethink of how digital services are developed is required. Instead of treating people as the “end users” of AI/ADM-powered digital services, governments should put humans at the centre of their design and deployment, helping to achieve people-focused outcomes and mitigate the risk of harm. This will also require working closely across government departments, and with the private sector, universities and non-profits, to understand what citizens really want from government services, and then considering how technology can be harnessed to deliver this in an inclusive and equitable way.
2. Clear risk-based regulation and governance
Much of the mistrust in AI/ADM stems from a lack of clarity around its permitted use. The fast growth of this technology has left regulators scrambling to keep up. Many countries initially hoped for self-regulation or chose to rely on existing legislation, regulations and case law as potential vehicles to govern AI use or at least protect individuals who might be subjected to its outcomes. But while elements of existing laws may touch on some of the risks arising where AI or ADM is deployed, they often will not cover AI/ADM-specific outcomes.
A consistent national regulatory framework can build trust in AI/ADM, and deliver the clarity required to accelerate investment and innovation, both within the public sector and industry. As outlined in the EY-Trilateral Research report, A survey of artificial intelligence risk assessment methodologies – the global state of play, while we currently see a diversity of approaches to AI governance around the world, more jurisdictions, including the EU, are moving to risk-based regulation.
In a risk-based approach, the burden of compliance is proportionate to the risk posed by the technology. It balances the need to mitigate potential misuse against the need to encourage the innovation that will unlock more benefits for the public sector and citizens. It should include clear guidance on how AI risks will be assessed, so that organisations can determine whether their intended application could be considered high risk, and invest time and resources accordingly.
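To make the proportionality principle concrete, the sketch below shows how a risk-based scheme might map applications to tiers with escalating obligations. The tier names loosely echo the EU’s risk-based approach, but the specific applications, tiers and obligations here are hypothetical illustrations, not any jurisdiction’s actual rules:

```python
# Hypothetical sketch of risk-proportionate compliance.
# Tiers and example applications are invented for illustration only.

RISK_TIERS = {
    "unacceptable": ["social scoring of citizens"],
    "high": ["welfare eligibility decisions", "biometric identification"],
    "limited": ["citizen-facing chatbots"],
    "minimal": ["spam filtering"],
}

# The compliance burden escalates with the tier.
OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, human oversight, audit logging",
    "limited": "transparency notices to users",
    "minimal": "voluntary codes of practice",
}

def classify(application: str) -> str:
    """Return the risk tier for a proposed AI/ADM application."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    return "minimal"  # unlisted uses default to the lowest burden

def compliance_burden(application: str) -> str:
    """Map an application to its proportionate compliance obligations."""
    return OBLIGATIONS[classify(application)]
```

The key design point is the one the text makes: organisations can look up, in advance, whether an intended application is high risk and budget their compliance effort accordingly.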
Governance is at least as important as legislation. The establishment of a designated regulatory agency – appropriately funded and staffed – is an important step in increasing market confidence in AI and ADM adoption. While some have argued for the establishment of non-regulatory “centres of excellence” to guide the use of AI/ADM, a central body with the authority to introduce binding market regulation would significantly increase certainty in the market for AI/ADM adoption and innovation.
3. Assurance that instils confidence
Assurance is all about building confidence and trust. It goes hand in hand with correctly operationalising regulatory obligations and validating technology-driven outcomes. Just like auditing in other industries, a robust AI assurance framework that checks and verifies systems and processes and allows for decisions to be traced and explained (and challenged if necessary) can build trust in AI, guard against bias in models and provide the level of confidence needed to broaden its use.
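The traceability requirement described above can be sketched in code: each automated decision is recorded with its inputs, model version and explanation, so it can later be audited, explained and, if necessary, challenged. This is a minimal illustration of the principle; the class and field names are invented, not any framework’s actual API:

```python
# Hypothetical sketch of decision traceability for AI/ADM assurance.
# Every automated decision is captured as an auditable record.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    inputs: dict
    outcome: str
    explanation: str  # human-readable reason, so the decision can be explained
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only log supporting later review of automated decisions."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def trace(self, case_id: str) -> list[dict]:
        """Return every recorded decision for a case, for audit or challenge."""
        return [asdict(r) for r in self._records if r.case_id == case_id]
```

A real assurance framework would add tamper-evidence (for example, cryptographic hashing of records) and retention rules; the sketch only shows how recording inputs, versions and explanations makes decisions traceable.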
Several countries are considering how to develop AI assurance frameworks. AI assurance is a major priority for the UK government, as outlined in the UK’s National AI Strategy which sets out an ambition to be “the most trusted and pro-innovation system for AI governance in the world”. The UK has developed an AI Assurance Roadmap, which includes recommendations around developing common standards and techniques, building a dedicated AI assurance profession and improving links between industry and researchers.
Australia is well placed to take a leadership position in shaping the future of AI assurance, which will be of increasing importance to economic competitiveness and geopolitical security. Just as we have taken a leading role on the international stage in the creation of standards in domains such as blockchain and cybersecurity, Australia can also influence the development of a mature AI assurance framework.
Three actions government can take now to address the AI trust deficit
The ability of AI and ADM to augment our existing processes and systems to deliver simple, smart, digital support can help completely reframe how public sector services are delivered – moving away from a departmentalised approach to one that is connected, personalised and people-centred, delivering greater benefits for government and people. If Australia is to truly reach its ambition to be a top-10 digital nation by 2030, we urgently need to accelerate this potential through adoption of a consistent nationwide risk-based regulatory framework, underpinned by robust assurance and focused on positive human impacts. Not only will this build trust in AI and ADM systems, it will also help allay fears that technology will dehumanise the public sector, and create confidence in its ability to do just the opposite: create an effective digital government delivering better outcomes for citizens. We believe that taking three steps now can help Australia’s government achieve this:
- Set an example for the private sector: Understand what AI and ADM technologies are currently in use within government, and assess whether they are truly serving the needs of citizens.
- Establish a central regulatory agency with the authority to introduce binding market regulation for AI and ADM.
- Introduce national risk-based regulation for AI and ADM, underpinned by a clear assurance framework.