As AI gains human traits, will it lose human trust?

Authors
EYQ

EYQ is EY’s think tank.

By exploring “What’s after what’s next?”, EYQ helps leaders anticipate the forces shaping our future — empowering them to seize the upside of disruption and build a better working world.

Gautam Jaggi

EY Global Markets EYQ Global Insights Director

EYQ analyst exploring what's next. Focus areas: disruption, megatrends, behavioral economics, health, future of work, AI. Passionate about photography, travel and music.

9 minute read 25 Jan 2019


Businesses today are facing the unprecedented challenge of embracing automation and technological innovation, without eroding consumer confidence.

Trust in organizations and institutions is at an all-time low. Organizations need behavioral design if they want to automate, innovate and maintain their consumers’ trust.

In an environment of falling trust and growing skepticism, consumer engagement will require a new approach to design that is informed by human psychology. Design is no longer just the realm of creative types. It is instead a strategic imperative that drives trust and growth, and it needs focus from the C-suite.

Companies need to understand the psychological underpinnings of consumer anxieties and use design to address them.
Amy Brachio
EY Global and Americas Advisory Risk Leader

How human augmentation will influence human behavior

While design and behavior have always been linked, the connection is gaining new significance thanks to the next generation of technologies. These innovations — including artificial intelligence (AI), robots, autonomous vehicles (AVs), augmented reality (AR) and virtual reality (VR) — are unique in two key ways:

  1. They will be unprecedentedly autonomous, acting on our behalf without direction from us
  2. They will be compellingly lifelike, emulating human attributes with increasing sophistication

Consequently, this next generation of technologies will be unlike any we have seen so far. They will enable a new era of human augmentation, in which technologies look like us and act like us, often without our input.

Human augmentation technologies will be game-changing for companies and their customers. They could open up new ways of engaging consumers — from conversational interfaces that replace keyboards to digital assistants that autonomously make purchasing decisions — and create a new generation of empowered “super consumers.”

But the era of human augmentation will have even bigger consequences for human behavior. As technologies blur the line between human and machine, they will get very close to the core of what makes us human. In doing so, they will trigger emotional responses that are hard-wired in our psychology. For instance, millions of years of evolution have predisposed us to react to human faces in certain ways. This will have huge implications for how consumers respond to convincingly lifelike robots — an issue that has not been a concern with prior technologies.

For companies, it will be critical to understand these behavioral biases and design for them. This is the realm of behavioral design.

Adopt behavioral design to increase consumer trust in new technologies

Behavioral design is design that is informed by insights from human psychology. Armed with these insights, companies can create products, features, interfaces and messaging that account for the cognitive biases that human augmentation technologies are likely to trigger. Here are some examples:

We are predisposed to fear new technologies

Automation is already sparking fears about everything from job losses to AV safety to the prospect of AI becoming self-aware and threatening humanity. While any new technology certainly creates some risks, several cognitive biases predispose humans to overestimate such threats.

Probability neglect leads us to focus on the magnitude of outcomes (e.g., dying in a car crash) rather than their associated probabilities (e.g., automated vehicles are statistically safer than human drivers). To the extent we do process probabilities, we tend to overestimate small chances.

The availability heuristic leads people to focus on, and exaggerate the importance of, readily available information. So, the barrage of news coverage about a single AV crash while in automated mode drowns out a sea of underlying data about AV safety.

Such fears are already being triggered by AI and AVs. Expect more as technologies such as passenger drones and brain-machine interfaces come into their own.

Key questions to consider:

  • How will the design of our next-generation products and services intersect with human psychology?
  • How could our messaging and marketing strategy mitigate the fear of new technologies?
  • Do we know how to address safety concerns at the emotional level rather than just by disseminating safety data?
  • How could we leverage the power of social groups and networks to boost trust in our offerings?

Control is important

“The human brain is built to control its environment — it’s a key motivator that drives us,” says Tali Sharot, Professor of Cognitive Neuroscience and an EYQ Fellow at EY. “Having a sense of agency and an opportunity to make a choice triggers a reward signal in our brain, similar to eating a piece of chocolate, while the loss of control can trigger anxiety.”

It’s not surprising, therefore, that the illusion of control bias predisposes us to want to feel that we have control even in situations where we don’t. The “door close” button in many elevators, for instance, does not affect how soon elevator doors shut — it merely gives users a sense of control.

This aspect of human psychology will become increasingly relevant as human augmentation technologies start acting on our behalf. For instance, AVs could in theory enable a complete redesign of automotive cabins to look more like living rooms or gyms, but the need for control might instead mean that steering wheels and brake pedals have to be retained.

The human brain is built to control its environment — it’s a key motivator that drives us. Having a sense of agency and an opportunity to make a choice triggers a reward signal in our brain, similar to eating a piece of chocolate, while the loss of control can trigger anxiety.
Tali Sharot
Professor of Cognitive Neuroscience and EYQ Fellow

Key questions to consider:

  • As AI becomes more prevalent, what decisions are our technologies starting to make on behalf of consumers?
  • In which spaces do consumers have the most anxiety about losing control, and are there spaces in which they welcome it?
  • How could we incorporate meaningful informed consent to give customers a greater sense of control over their data and its usage?
  • Can we demonstrate the specific benefits that consumers gain in exchange for relinquishing control, and do we understand how they feel about these trade-offs?

Lifelike interfaces trigger human psychology

As AI assistants, robots and VR become increasingly lifelike, they could trigger cognitive biases that designers will need to keep in mind.

We have a deep-seated tendency to anthropomorphize inanimate objects, attributing human-like qualities to them. Designers have long used this tendency (e.g., car grilles that subtly evoke a human mouth to generate positive feelings). However, robots and AI assistants will take anthropomorphism bias to a whole new level, with implications for user adoption and engagement.

Anthropomorphic design insights are already emerging. For instance, studies find that digital assistants are more likeable if they make small mistakes instead of operating flawlessly — something known as the pratfall effect.

Another bias, the uncanny valley, leads people to feel repulsed by robots or VR implementations that appear almost, but not quite, human. This suggests that developers might deliberately keep products from becoming too lifelike in the short run. (Many expect the repulsion effect to disappear once designs become indistinguishably lifelike.)

Key questions to consider:

  • In which segments are our offerings becoming more lifelike than ever before?
  • How can we use anthropomorphic design to boost positive feelings towards our products and services?
  • Are we using market research and human psychology to guide decisions about how lifelike our products and services should be? 
  • Do we understand how consumer attitudes toward lifelike technologies vary across demographic groups and geographies — and how this affects our growth plans? 

Use behavioral design to build trust

All of this has huge implications for companies using next-generation technologies to develop products and services.

“Breakthrough technologies such as AI and robots could boost standards of living and create new growth opportunities for companies,” says Amy Brachio, EY’s Global Risk Advisory Leader. “But new technologies can also bring new risks — in this case, the prospect of consumer backlash and resistance. Companies need to understand the psychological underpinnings of consumer anxieties and use design to address them.”

For CEOs and boards, this requires rethinking business models, corporate structures and controls. Design needs to be considered at the highest levels of the firm and much earlier in the development cycle.

Key questions to consider:

  • Do you need behavioral scientists in the boardroom?
  • How could behavioral design inform due diligence on your acquisitions and alliances?
  • Where does behavioral design expertise sit within your organization? Is it integrated or outsourced?
  • How are all aspects of the business, from R&D to business development to marketing, being challenged to stay at the cutting edge of behavioral design?

There is a lot at stake here, both for corporations and individuals. Behavioral design affects every company. It applies not just to AVs and robots, but to everything from data mining to social media. It extends to every aspect of the value chain, from R&D to sales. More broadly, whether we resist new technologies or embrace them — and whether they make our lives better or worse — will depend on behavioral design. In a world of humanoid machines, we need, more than ever, to design for human psychology.


Summary

In an environment of falling trust and growing skepticism, consumer engagement will require a new approach to design that is informed by human psychology. Design is no longer just the realm of creative types. It is instead a strategic imperative that drives trust and growth, and it needs focus from the C-suite.

Behavioral design is one of the 10 disruptive forces identified in the EY Megatrends report. To find out more and download the EY Megatrends report in full, please click here.
