
Why successful adoption of AI will build on a foundation of trust

By Frank De Jonghe

EY EMEIA Financial Services Quantitative & Analytics Services Leader

Pragmatic. Outcome driven. Lateral thinker. Architect. Wants to see end to end. Needs the challenge of change. Enjoys gardening and cooking.

4 minute read 22 Jun 2021


Building AI solutions that inspire confidence requires a multifaceted approach across scientific disciplines, industries and government.

In brief
  • As businesses harness Artificial Intelligence (AI), it is important to look at potential risks and build AI that is trustworthy and beneficial to society.
  • AI systems offer human-level perception capabilities, such as image recognition and speech-to-text, that help people make better-informed decisions.
  • The technology also raises concerns about whether its decisions align with humanistic and societal values, including the wellbeing of the people exposed to it.

In the early 19th century, the Luddites were a secret oath-based organization that destroyed textile machinery in industrializing England, and they subsequently entered the history books as a byword for anti-progress sentiment. Algorithms that exploit the value of big data to support decision-making in complex settings, combined with intelligent automation (IA) that leverages breakthroughs in Artificial Intelligence (AI) in areas such as Natural Language Processing (NLP) and image recognition, hold significant promise for all areas of economic and social activity. Yet unless public services and corporations can build trust with citizens, customers and employees in the intrinsic virtues of these technologies, and in how they are being used, there is a risk that a Luddite-like backlash delays the adoption of promising technology.

Trust is multifaceted. It covers concepts such as data privacy, transparency, accountability and the security of IT infrastructure, which we know well how to operationalize. But it also covers more general humanistic and societal aspects, including the autonomy of the individual, fairness and absence of bias, and even the general wellbeing of people exposed to AI, whether they are aware of the exposure or not.

Because of these broader concerns, and because AI applications are pervasive and ubiquitous, awareness, and even alarm, in society is heightened. Regulation is being contemplated, and given the huge variety of voices contributing to the current debate, it is likely to be fragmented.

Findings from a global study by EY in collaboration with The Future Society (TFS) show that organizations focus on the technical and operational aspects of AI, such as data privacy, while legislators and regulators emphasize its societal and humanistic impact.

Despite these ongoing debates, and the risk they entail for responsible management in both the private and public sectors, a few basic principles can profitably be applied to navigate the challenges, irrespective of the regulation that is likely to come.


Understanding the business purpose

Any application of an AI agent supports a goal that is defined independently of whether AI is used to achieve it. In this context, AI is a tool, an instrument within a given activity. By comparison, a medical doctor can make a more accurate diagnosis with the aid of an image recognition application, but the quality standards society sets for such a diagnosis do not depend primarily on the tools used. In other words, one always needs to ask: “What does the AI agent contribute to the achievement of the stated goal?” The more dependent the process is on the AI agent to achieve the goal, the more demanding the subsequent governance and monitoring requirements will have to be.

Be aware of the exposures

When AI is used to assess a candidate applying for a job, or the creditworthiness of a company, it is relatively clear where an algorithm intervenes. However, an AI agent may be so immersed in a process that we hardly realize it is there. Think, for instance, of the style and spelling autocorrect that runs as you type an email. In fact, a lot of AI research investigates how to make human-robot or human-AI interaction so natural that there is no friction in the mind of the human in the loop. It is precisely on this boundary that the challenge to human autonomy comes to the fore.

To avoid being taken by surprise, it is therefore important to keep, from the outset, an accurate inventory of where AI applications are being used.
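As an illustrative sketch only, an inventory entry might record each application’s purpose, how strongly the process depends on the AI agent, and who is exposed to it. The record structure, field names and `Dependence` levels below are assumptions made for this example, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Dependence(Enum):
    """How much the process outcome depends on the AI agent."""
    ADVISORY = "advisory"        # human decides, AI suggests
    SUPERVISED = "supervised"    # AI decides, human reviews
    AUTONOMOUS = "autonomous"    # AI decides without routine review


@dataclass
class AIUseCase:
    """One entry in an organization-wide AI exposure inventory."""
    name: str
    business_purpose: str
    dependence: Dependence
    exposed_groups: list[str] = field(default_factory=list)
    owner: str = ""


# Hypothetical entries: the more autonomous the agent, the more
# demanding the governance and monitoring requirements should be.
inventory = [
    AIUseCase(
        name="email-autocorrect",
        business_purpose="Reduce typos in outgoing correspondence",
        dependence=Dependence.ADVISORY,
        exposed_groups=["employees"],
    ),
    AIUseCase(
        name="onboarding-face-match",
        business_purpose="Verify client identity during onboarding",
        dependence=Dependence.AUTONOMOUS,
        exposed_groups=["prospective clients"],
        owner="client-onboarding team",
    ),
]
```

Sorting such entries by dependence level gives a first, rough prioritization of where governance effort should go.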

Define control objectives explicitly

The purpose of the AI defines the risks. An image recognition application that classifies fruit at a supermarket checkout may produce erroneous grocery bills, but the stakes are far lower than for facial recognition used in a bank’s automated client onboarding process, or a job recommender system in a government labor agency. On the basis of the exposure inventory mentioned above, it should be possible to differentiate the risks of the different applications and define appropriate mitigating controls. These can include automated performance monitoring of the AI, but also human intervention at appropriate junctures in the process.
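As one hedged illustration of such an automated control, the sketch below flags a model for human review when its observed accuracy over a monitoring window falls below an agreed threshold. The function name and the 0.95 threshold are assumptions chosen for the example; in practice the threshold would follow from the risk assessment of the specific application.

```python
def accuracy_alert(predictions, labels, threshold=0.95):
    """Flag the model for human review when observed accuracy over a
    monitoring window falls below the agreed control threshold.

    The default threshold is illustrative; it should be derived from
    the risk assessment of the specific application.
    """
    if not predictions:
        raise ValueError("empty monitoring window")
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(predictions)
    return accuracy, accuracy < threshold


# Example: checkout classifications compared with cashier corrections.
acc, needs_review = accuracy_alert(
    ["apple", "pear", "apple"], ["apple", "pear", "banana"]
)
print(f"accuracy={acc:.2f}, escalate to human review: {needs_review}")
```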

Deep dive validation reviews

When an application is highly customized, or the risk it poses to the process owner is significant, a bespoke algorithm or model validation procedure can be valuable. Such a review would look at the training data sets in detail, as well as the algorithm used, its implementation, and the reasonableness of the outcomes. And of course, all of this would be judged through the lens of the business purpose and the control objectives to determine the degree of success.
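As a minimal sketch of one ingredient of such a review, checking the reasonableness of outcomes, the snippet below compares positive-outcome rates across groups on a held-out data set. The four-fifths ratio used as a flag is a common fairness heuristic chosen for illustration, not a standard this article mandates, and all names and data are hypothetical.

```python
from collections import defaultdict


def selection_rate_ratio(outcomes, groups):
    """Compare positive-outcome rates across groups.

    Returns the ratio of the lowest to the highest group selection
    rate, plus the per-group rates. Ratios below roughly 0.8 are often
    treated as a signal for deeper investigation (the 'four-fifths'
    heuristic), one illustrative convention among many.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += int(outcome)
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates


# Hypothetical validation sample: 1 = positive outcome (e.g., shortlisted).
ratio, rates = selection_rate_ratio(
    outcomes=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, ratio)  # flag for deeper review if ratio < 0.8
```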

In short, while trust may be a subjective concept, the process to build and nurture it is well recognized. Users of AI, and the people exposed to it, must be educated about its potential flaws, and it must be clear to all that the necessary efforts are being made to address those flaws on an ongoing basis.

Summary

AI is a powerful technology with immense potential to improve our lives; however, concerns about the trustworthiness and accountability of AI could hold back its adoption. The task is to build AI systems that bridge this trust gap and deliver benefits to the economy and society.
