EY Megatrends

How reframing trust allows firms to navigate change and unlock growth


Trust is the new currency fueling growth, innovation and resilience. Are you prepared?


In brief

  • In an increasingly uncertain world, trust is emerging as a strategic lever for growth, resilience and competitive advantage.
  • Organizations investing in trust will better adapt, innovate and thrive — outperforming peers in talent retention, customer loyalty and crisis recovery.
  • Successfully building trust requires new approaches to governance, technologies, business models and leadership behaviors.

This article is part of the second set of insights in the new EY Megatrends series, New frontiers: The resources of tomorrow.

It’s a Tuesday morning in the first week of April, sometime in the next decade. Amara enters the global headquarters of her employer, a US-based multinational food and beverage company. As she heads to a seventh-floor conference room, she thinks back on the journey to this day.

Five years earlier, the CEO had recruited her to be one of the first of a new breed of C-suite leaders: a Chief Trust Architect. Unlike the Chief Trust Officers of the 2020s, who had overseen areas such as data privacy, cybersecurity and compliance, her remit was broader and more strategic. She would be responsible for investing in trust as a strategic asset. Under her stewardship, the company would manage, measure, grow and monetize trust, making it a key value driver in an uncertain world.

The timing turned out to be prescient. In an environment of increasingly nonlinear, accelerated, volatile and interconnected change — the NAVI world — their sector was soon rocked by successive shocks. Whiplash weather fueled by accelerating climate change triggered multiple crop failures. Food riots broke out. Supply chains broke down. Refugees streamed across borders. Consumers, already reeling from persistent inflation, were hit by double-digit food price increases. Deepfake videos of CEOs making inflammatory statements went viral. Consumers and politicians vented their anger on food and beverage multinationals. Big Food’s reputation plummeted, following the trajectory of Big Tobacco and Big Pharma in decades past. 

Yet, even as trust in its competitors plunged, her company held up relatively well, thanks to its investments in increasing trust. The decision to put all its suppliers on the blockchain provided end-to-end supply chain transparency and advance warning about shortages and bottlenecks. Digital watermarking technologies helped stamp out deepfakes before they gained traction. Risk management innovations — such as real-time monitoring, risk assessments and war gaming — built resilience and agility in leadership teams.

When she had accepted the position, she had asked the CEO for one commitment: complete transparency from the leadership team. This principle proved invaluable as crises hit. Unlike many others, her company was open with information — good, bad or ugly — which only strengthened its credibility at a time of profound uncertainty. Its stock price held up well, rebounding quickly from any negative news because of the “trust premium” it built up with investors. 

The company didn’t just survive this volatility: it invested more aggressively and confidently than competitors in artificial intelligence (AI). Here, too, the trust-first approach paid dividends, leading to faster adoption. 

 

All of which led to this day: the morning of the latest quarterly earnings call. On it, the company planned to announce the public rollout of another of her innovations: the Trust Index. For several years, the Index had combined data from a range of disparate sources, allowing the company to track how trust was changing in real time and measure the ROI on trust investments. Starting this quarter, the Trust Index would be more than an internal metric; the company would include an update on it on every earnings call. 

 

Welcome to a day in the future of the Trust Economy. 

A worker inspects a rough diamond in a workshop in India. Photographer: Dhiraj Singh/Bloomberg

Chapter 1

The Trust Economy

Trust is becoming scarcer and more valuable. Can you reframe it as an asset instead of a challenge?

Over the last few decades, technology has elevated a series of intangible resources into key sources of value. These virtual assets drove competitive advantage and market power, much as the race for key natural resources did in prior industrial revolutions:

  1. The Data Economy: The first of these revolutions was data. The IT revolution transformed data from a scarce asset to an abundant one. Digitization produced vast amounts of data and the internet democratized access to much of it. “Data is the new oil” became an oft-repeated mantra. In the Data Economy, companies succeed by extracting value from this resource at scale, which requires the ability to produce it, understand it and monetize it — the very capabilities companies will increasingly need with respect to trust.
  2. The Attention Economy: In the Web 2.0 era of social media, attention became the key intangible resource and value driver. Companies built offerings and business models around engagement, as the ability to capture users’ attention and shape their behavior became huge drivers of value.
  3. The Trust Economy: We are now on the cusp of the next big shift, in which trust will become the new intangible resource that will drive competitive advantage and growth. Trust has been diminishing in recent years — ironically, as a by-product of the Attention Economy that preceded it. The attention merchants of social media optimized their algorithms to maximize user “engagement” but often increased polarization and fueled misinformation along the way.

“There are four key components for achieving success with AI: data, trust, value and adoption,” says Joe Depa, EY Global Chief Innovation Officer. “Data is the critical foundation; there is no AI without data. But equally foundational is trust — how do I trust the data, and how do I trust the AI? Without trust, you undermine the utility of your data and you hamper your ability to get to value and adopt.”

 

Indeed, declining trust undermines the value of the other intangible resources: data and attention. A major driver of diminishing trust has been concerns about data privacy and misuse, which makes consumers less willing to share their data and more suspicious of attempts to nudge their behavior.1

The shift to the Trust Economy does not mean every aspect of declining trust will be fundamentally reversed. It merely means that trust will become increasingly in-demand and valuable. Much as the rise of the Attention Economy coincided with shrinking attention spans2, the Trust Economy will emerge against a backdrop of declining trust. Organizations that insulate themselves from this decline and grow trust in their organizations, brands and market offerings — converting trust from a challenge into an asset that can be measured, tracked, invested in, grown and monetized at scale — will position themselves to survive and thrive in the Trust Economy.

Data is the critical foundation; there is no AI without data. But equally foundational is trust — how do I trust the data, and how do I trust the AI?

The loss of trust is already a constraint leaders are running into, with real consequences for companies. EY research finds that trust is the top factor consumers consider when deciding whether to use a product or service — ranking higher than five other considerations, including a company’s brand or a personal recommendation.3 Analysis by The Economist magazine found that a company loses 30% of its value when it loses trust.4


In an environment of widespread trust deterioration, standing still is not an option. Leaders have two choices. You can act now to reframe trust as a strategic asset for competitive advantage. Or you can attempt to proceed with “business as usual” and watch as the declining trust environment degrades trust in your organization and your offerings.


Chapter 2

Declining trust is a key challenge of our time

Trust in institutions, technology and society is waning across much of the world.

Trust has been on a downward trajectory in recent years across several dimensions. These can be grouped into three major categories: trust in technology, social trust and trust in institutions.

Trust in technology

Trust in technology started declining in the latter half of the 2010s, as a growing “techlash” emerged in response to data violations during the 2016 US Presidential election and UK Brexit referendum. In 2016, technology had been the most trusted sector in 90% of countries tracked by Edelman. By 2024, that number had fallen to 50%.5

Trust in technology could decline even further with the next generation of technologies. There is a worrying 26-point gap between trust in the tech industry and trust in AI.6

“We are in a time in which technology is being implemented at increasing speed, which creates anxiety,” says Jeanne Boillet, EY Global Accounts Committee Assurance Lead. “People get frightened by what they don’t understand. They can see change is happening, but they don't understand what it means for them. Bridging this gap requires companies to adhere to Responsible AI principles with transparency. It also requires individuals to familiarize themselves with new technologies, much as they would train for a sport or musical instrument.”

Indeed, AI is giving rise to new worries and concerns, ranging from anxiety about job losses to concerns about impacts on privacy, social inequities and misinformation.7 EY research finds significant gaps between C-suite leaders and consumers on these issues.


These concerns include the impact of AI on not just office jobs but also creative and entrepreneurial activities. “AI could undermine people’s rights and opportunities in ways that social media didn’t,” says Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh. “Social media spawned a creator economy and unleashed entrepreneurial spirits. AI could threaten people’s ability to make a living from talents and businesses they’ve spent decades cultivating. In the worst cases, it could even extract their creations — without permission or credit — and then monetize and replace them. All of this fuels mistrust and anxiety.”

The continuous evolution of AI could create new challenges, such as the prospect of newer models becoming less reliable. With the size of data sets used to train AI models doubling every six months,8 concerns are growing about diminishing data quality.9

 

“To deliver enhanced capabilities, the newest AI models need large amounts of data,” says Cathy Cobey, EY Global Responsible AI Leader, Assurance. “Unfortunately, sourcing so much data sometimes requires relaxing data quality controls. There simply isn’t enough time to curate all the data, manually process it and conduct robust content moderation. This creates an increased risk that AI models are trained on fake and questionable data, making them less reliable and trustworthy.”

AI also promises to fundamentally change social media.10 Instead of creating connections between people, AI will increasingly enable social connections between people and synthetic personas, such as AI agents and avatars. 

“People already use AI as confidants and companions,” says Vallor. “In the future, they may be encouraged even more to use AI tools for care and support. This will damage trust, in ways we haven’t even started to appreciate. When people have spent untold hours sharing their problems with an AI bot that never challenges their framing, never pushes back and never inserts its own needs, this will cause real problems when these same people need other people to help them through their problems, because they will no longer be habituated to social interdependence. Overreliance on technology could end up fracturing interpersonal trust.”

Social trust 

Ethan Zuckerman, Director of the Institute for Digital Public Infrastructure at the University of Massachusetts at Amherst and author of Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them, points out that trust in government and social institutions, at least in the US, has been falling for a long time. 

“Social trust has declined over decades,” says Zuckerman. “After the Nixon Presidency, the media became less deferential to power and people grew unwilling to accept superficial answers. A similar decline started after 11 September (2001) and the Iraq War. Today, only 15% of Americans believe the government does the right thing all or most of the time. Some amount of mistrust is healthy – skepticism is essential for a functioning democracy and economy – but you don’t want to get to the point where mistrust becomes paralyzing and corrosive.”

This decline has accelerated in recent years, catalyzed in part by technological trends. Social media has played an active role in creating “filter bubbles” that have driven up polarization. While much attention has focused on polarization in the context of US politics, analysis shows that the problem is global and that it has been getting worse. 


AI has the potential to industrialize the production of misinformation.11 “While AI is a tremendous force for good, it also has a dark side,” says Sinclair Schuller, EY Americas Responsible AI Leader. “It could take our misinformation problem to a whole new level, dwarfing anything we’ve seen in the social media era.” 

Trust in institutions 

Trust across a wide swathe of institutions — including governments, corporations, media, educational institutions, science and academia, NGOs, and international organizations — has been flagging across multiple countries. 

Trust in institutions varies significantly globally. With some exceptions, institutional trust is highest in low-income countries, while it is lowest in most high-income countries.12  The good news for business leaders is that, against a backdrop of declining trust in institutions, business has held up (relatively) well. Across multiple countries, business remains the most trusted institution.13

However, trust in business and trust in “my employer” fell in the most recent Edelman survey.14 Though the declines were not huge, they may be a warning sign, particularly since this was the first time “my employer” declined in 26 years of Edelman trust surveys.15


Chapter 3

Investing in trust

To thrive in the Trust Economy, focus on governance, technology, business models and leadership behaviors.

The trust challenge articulated in the previous chapter is massive and complex. Aspects such as polarization and misinformation are well beyond the ability of any individual organization to solve and will likely require collaborative approaches involving policymakers, regulators and industry. Focus instead on what directly impacts your organization and what you can influence or change. For most, this means influencing the trust your key stakeholders — customers, employees and investors — have in your institution, brands and technologies. 

Next, identify the attributes that will lead these stakeholders to view your organization as trustworthy. Across multiple studies, three common themes appear repeatedly, which form the pillars of trust:

  • Competence: People trust organizations and leaders based on their ability to effectively perform their roles and their tendency to do what they say.
  • Integrity: People trust organizations and individuals that appear honest and fair in their actions.
  • Benevolence: People trust entities that demonstrate they care about others’ well-being, not just their own self-interest.

The relative weight of these pillars can vary across companies and sectors. Consumers might place higher importance on an airline’s competence while giving its benevolence less weight. Conversely, with an educational institution, trust may be driven more by benevolence — knowing schools and teachers are motivated by students’ welfare rather than by financial gain. 

A commonly used construct, the trust equation, brings these elements to life: Trust = (Credibility + Reliability + Intimacy) / Self-Orientation. By strengthening credibility and reliability, organizations demonstrate competence and integrity. By fostering intimacy and reducing self-orientation, they signal benevolence and stakeholder focus. 
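The trust equation can be sketched in code. In the toy example below, the 0-10 scoring scale and the example figures are illustrative assumptions, not part of the original construct:

```python
# Toy illustration of the trust equation:
#   Trust = (Credibility + Reliability + Intimacy) / Self-Orientation
# The 0-10 scale and example scores are assumptions for illustration.

def trust_score(credibility: float, reliability: float,
                intimacy: float, self_orientation: float) -> float:
    """Compute the trust equation; higher self-orientation lowers trust."""
    if self_orientation <= 0:
        raise ValueError("self_orientation must be positive")
    return (credibility + reliability + intimacy) / self_orientation

# Two hypothetical organizations, scored 0-10 on each component.
# Identical credibility, reliability and intimacy; only self-orientation differs.
stakeholder_focused = trust_score(8, 9, 7, 2)  # low self-orientation
self_interested = trust_score(8, 9, 7, 6)      # high self-orientation
print(stakeholder_focused, self_interested)    # 12.0 4.0
```

The denominator is what makes the construct useful: an organization can be competent and reliable yet still score low on trust if stakeholders perceive it as acting mainly in its own interest.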

To strengthen these pillars, companies and leaders will act across four dimensions: people and psychology, policies and processes, profit mechanisms, and protocols and platforms.

People and psychology: the human quotient

“Our preferred state as a species is one of reciprocal trust,” says Social Psychologist Heidi Grant, Director of Behavioral Science & Insights at Ernst & Young LLP, in the US. “We want to be trusted and trust others. We want to be helped and help others.”

Indeed, evidence from multiple disciplines — including neuroscience16, behavioral science and evolutionary psychology17 — suggests the propensity to trust is hardwired in us, conferring an evolutionary advantage throughout human history.

Since trust is so fundamentally human, impersonal policies and protocols can only go so far. The most important part of building trust is the human component. Organizations can learn from disciplines such as behavioral science to understand the psychology of trust.

Several behaviors can build trust by demonstrating competence, integrity and benevolence:

  • Transparency. In trust, perception is everything. “Human behavior is ambiguous by nature,” says Grant. “It is open to multiple interpretations. Unfortunately, our brains are wired to interpret something that’s ambiguous in a negative way and assume the worst. So, it’s important to not just be trustworthy, but to be seen as trustworthy. This means being transparent, which can help signal qualities like integrity and benevolence.”

  • Vulnerability and intimacy. For companies — and particularly leaders — a key component is vulnerability. Doctors who admit they’ve made a mistake are less likely to be sued for medical malpractice, because vulnerability builds trust, but corporate lawyers are reluctant to allow this.18

    Business leaders are conditioned to display confidence and to always appear to have all the answers. They may shy away from being fully candid because they don’t want to make their employees anxious, or their instinct is to wait until they have everything perfectly figured out before saying anything in public.

    Yet, leaders may find reassurance in behavioral science insights about how we respond to adverse information and outcomes. “We often think human beings can’t tolerate negative outcomes,” says Grant. “But research on what behavioral scientists call ‘procedural justice’ shows that people may tolerate a negative outcome, but not an outcome that was arrived at in an unfair way. So, don’t worry about your stakeholders disliking a decision; instead, be transparent about how decisions were made and demonstrate they were made in a just way.”

  • Inclusivity. One often overlooked way to create transparency is by bringing consumers into the tent. “When many companies think about using technology to engage with customers, they default to social media platforms and influencers,” says Zuckerman. “But a more useful technological approach is open-source software. Linux and Wikipedia demonstrate people love projects they can be a part of. How could companies and brands create truly participatory projects and harness them to build inclusivity and trust?”

  • Accountability. “After a trust breach, it can be critical to immediately apologize and assume ownership,” says Peter Kim, Professor of Management and Organization at the University of Southern California and author of How Trust Works: The Science of How Relationships Are Built, Broken and Repaired. “However, my research finds a second dimension: whether people perceive the incident as a failure of competence or integrity. People are generally forgiving of competence breaches. But if a company apologizes for what consumers see as a matter of integrity, the apology can backfire. People interpret the apology as an admission that you are dishonest, which is not as readily forgiven.

    “The key insight for companies: If a competence failure might be misinterpreted as a lack of integrity, it is essential to correct that misapprehension,” Kim says. “Assessments by independent third parties — such as investigators, regulators, auditors or licensing boards — may help do that.”

Profit mechanisms: business models

Social media’s role in diminishing trust is a predictable result of its business model. Since these firms make money by keeping users on their platforms and collecting behavioral data, their business models are built to maximize user engagement. We have since discovered, to our collective dismay, that if you let algorithms loose with the goal of maximizing user engagement, they converge on fueling screen addictions, misinformation and polarization. Incentives and business models drive organizational behaviors and outcomes, and leaders should be thoughtful about the business models they build around emerging technologies such as AI.

The good news is that AI may steer clear of some of social media’s perverse incentives. If social media was in the business of user engagement, AI is in the business of accurate and reliable insight and action. Social media made money by maximizing clicks and likes, but AI will make money by maximizing productivity, creativity and new sources of growth. In the AI era, misinformation and screen addictions will be unambiguously bad for business.

This does not mean that leaders should grow complacent. While AI business models may steer clear of the specific trust issues of social media, they may still raise new trust-related risks. As Vallor pointed out earlier, for instance, overreliance on AI in some instances could diminish interpersonal trust.

As firms begin to tap into AI’s real potential — disrupting and fundamentally reinventing business models — they should deliberately identify and address any latent trust-related issues.

Policies and processes: governance and risk management

Trust is built up slowly but can be decimated in a flash.

“Trust often plummets after a trigger event,” says Zuckerman. “Trust in the US medical system shifted significantly after the rise of managed healthcare and HMOs — approaches that restricted choice and were often perceived as putting profits over patient well-being. Trust in banks plummeted after the financial crisis of 2007–08. Trust is easy to destroy; it's very hard to get back.”

The implication for business leaders is that you may be just one breach away from a trust collapse. Strong governance and risk management are therefore critical backstops to maintaining and building trust.

At the minimum, governance requires regulatory compliance. This is itself a moving target, as expectations and standards ramp up. For instance, changing regulations in the tax transparency space have created new compliance requirements for many companies.

To build trust, though, governance extends well beyond compliance, to articulating principles the company stands for and developing policies and controls to put these into practice. A key example is AI, where articulating principles aligned with Responsible AI and instituting governance is a critical foundation for building trust in AI systems.

Independent audits, both internal and external, are important for ensuring tech platforms abide by stated principles. A paper coauthored by Nathanael Fast, Associate Professor and Director of the Neely Center for Ethical Leadership and Decision Making at the USC Marshall School of Business, and PhD candidate Maya Cratsley provided evidence that AI systems are particularly prone to “inventor’s bias,” in which inventors are overly optimistic about the trustworthiness and fairness of their inventions, even when these products underperform.19

“Inventor’s bias can be a problem, because the people who invented the AI system may know the most about it, and so the natural tendency is to rely on them for information on how good it is and how it performs against benchmarks,” says Fast. “But our research suggests it’s important to use independent third parties to assess how AI systems perform.” Independent assessments and audits of AI systems can help achieve this (pdf).

Finally, investing in risk management and enterprise resilience provides a layer of protection against risks that could precipitate crises of trust. In today’s more complex NAVI operating environment, risk management needs to be aligned with strategy to increase the likelihood of achieving strategic goals and targets amid increased uncertainty and volatility. To enable this, it’s critical for Risk functions to become partners to the business and be integrated in key strategic decisions.

         

Protocols and platforms: harnessing technology

Technologies can play a key role in enabling and signaling behaviors such as transparency, inclusivity and consistency. Here are a few examples:

Blockchain and distributed ledger technology: enabling “trustless trust”

Blockchain is a foundational technology of the Trust Economy, since transparency and security are fundamental to its architecture.

“With blockchain, you don’t have to trust — you can verify,” says Marek Olszewski, co-founder and CEO of Celo, a mobile-first platform working to democratize access to stablecoins. “Distributed ledgers are fully transparent and immutable. You can verify a vendor received your payment by checking whether the transaction is included in the ledger. You don’t need a trusted third party or intermediary to tell you what your balance is — the technology is the third party and it provides a single version of the truth.”

Distributed ledger technology is already being deployed to build trust in the business world. Maersk’s TradeLens platform increases transparency and security in international shipping.20 DeBeers’ Tracr platform helps the company remove conflict diamonds from its supply chain.21 In real estate, “proptech” startup Propy uses blockchain to streamline the buying and selling of properties with smart contracts, increasing security and reducing the potential for fraud.22
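The tamper-evidence that makes “don’t trust, verify” possible comes from hash chaining: each block commits to its predecessor’s hash, so altering any recorded transaction breaks every later link. The sketch below is a deliberately minimal toy illustrating the principle only; it is not how TradeLens, Tracr, Propy or Celo are actually implemented:

```python
import hashlib
import json

# Toy hash-chained ledger: altering any past transaction invalidates
# the stored hashes, so tampering is detectable without a trusted
# intermediary. Illustrative sketch only.

def block_hash(tx: dict, prev: str) -> str:
    payload = json.dumps({"tx": tx, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain: list, tx: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"tx": tx, "prev": prev, "hash": block_hash(tx, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        # Each block must point at its predecessor and match its own hash.
        if block["prev"] != prev or block["hash"] != block_hash(block["tx"], prev):
            return False
        prev = block["hash"]
    return True

ledger = []
append(ledger, {"from": "buyer", "to": "vendor", "amount": 100})
append(ledger, {"from": "vendor", "to": "shipper", "amount": 40})
print(verify(ledger))                  # True: the intact chain verifies

ledger[0]["tx"]["amount"] = 1_000_000  # tamper with a recorded payment
print(verify(ledger))                  # False: every later link is broken
```

In a real distributed ledger, many independent nodes hold copies of the chain and run this kind of verification, which is why no single party needs to be trusted to report the truth.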

Explainable AI: addressing AI anxiety through transparency

One source of growing anxiety about AI is its “black box” problem: the challenge of explaining the decision-making processes of advanced AI models. As these models increasingly make decisions and take actions that meaningfully affect the lives of workers and citizens, there will inevitably be growing calls for transparency into how these decisions are made.

Solving the problem is challenging because of the sheer complexity of deep learning models,25 which do not “think” like humans do. The precise logic for how an AI model arrived at specific conclusions remains largely opaque, even to its developers.

The Explainable AI (XAI) movement is working to tackle the problem. Grand View Research estimates the global XAI market will grow from US$7.8 billion in 2024 to US$21 billion by 2030, a CAGR of 18%.26 In addition to most of the leading tech giants, a host of startups are researching and developing solutions in the XAI space, including H2O.ai27 (which has developed a “comprehensive explainability toolkit” to explain AI results) and Fiddler AI28 (which is developing an “AI Observability platform” that allows companies to build “trustworthy, transparent and understandable AI solutions”).
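The internals of these toolkits vary, but one widely used model-agnostic technique in explainability work is permutation importance: shuffle one input feature and measure how much the model’s error grows. The stdlib-only sketch below uses an invented model and synthetic data (it is not drawn from any vendor’s product):

```python
import random

# Permutation importance: shuffle one feature column and measure how much
# prediction error increases. Works on any model treated as a black box.
# The model and data here are toy assumptions for illustration.

random.seed(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = [[random.uniform(0, 1) for _ in range(3)] for _ in range(500)]
y = [3 * row[0] + 0.5 * row[1] + random.gauss(0, 0.05) for row in X]

def model(row):  # stand-in for an opaque trained model we can only query
    return 3 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

baseline = mse(X, y)

def importance(feature: int) -> float:
    """Error increase when one feature column is randomly shuffled."""
    column = [row[feature] for row in X]
    random.shuffle(column)
    permuted = [row[:feature] + [v] + row[feature + 1:]
                for row, v in zip(X, column)]
    return mse(permuted, y) - baseline

scores = [importance(i) for i in range(3)]
print(scores)  # x0 dominates, x1 is minor, x2 is ~0
```

The appeal of the technique is that it needs no access to the model’s internals, which is exactly the constraint the black-box problem imposes; its limitation is that it explains which inputs matter, not the reasoning behind any individual decision.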

“As part of Wand OS, we built a Control Panel for AI agents,” says Rotem Alaluf, CEO of Wand AI. “It gives humans full visibility into every agent’s behavior — from high-level decisions down to each token and message generated. Transparency like this is essential for trust. When you can see why an agent makes decisions, at both the macro and micro level, and those reasons align with human logic, you build confidence in the Agentic Workforce and in AI systems overall.”

Emotion AI: building trust through empathy and relatability

Emotion AI, also known as affective computing, refers to AI models and interfaces that can detect, interpret and appropriately respond to human emotional signals from text, facial expressions, vocal intonation and other inputs. This can build trust by enabling more empathetic and personalized human-machine interactions — from customer service chatbots that respond appropriately to mounting frustration in a customer’s voice, to in-car sensors that detect subtle signals of driver fatigue, to educational software that changes course after detecting a student’s boredom or confusion.

“Building trust in AI requires a balance of IQ and EQ,” says Sean White, CEO of Inflection AI. “While most of the industry is focused on boosting the raw intelligence of a model’s reasoning and problem solving, we believe that emotional intelligence is just as important. That’s why we built Pi to be an engaging, empathetic, conversational, contextually aware AI agent that combines IQ and EQ to empower people. Pi’s personality is also consistent across versions, and that dependability is fundamental to building long-term trust.”

This is only a partial list of technologies that can play a role in building trust. Others range from authentication technologies for detecting misinformation (e.g., digital watermarking) to zero-knowledge proofs, which enable verification while protecting privacy and data security. If technology had a role in fueling mistrust, it can also be part of the solution.

The “trust stack”

The dimensions identified above can be thought of as a trust stack, in which governance and risk management form the foundational layer. Then come trust-building technologies, as well as the business models these technologies will enable. At the top of the stack are leadership behaviors, which both set the tone at the top and define the organization’s public-facing interface with its stakeholders.

Trust stack graphic


Summary

In the NAVI operating environment, trust is increasingly scarce and increasingly valuable. Investing in this scarce resource requires bringing a trust-first focus to every level of the trust stack: leadership behaviors, business models, technologies and policies for governance, risk management and compliance. Doing so can position your firm for resilience and competitive advantage in a world of uncertainty.