
As AI moves from advice to authority, who defines its limits?

The EY Global AI Sentiment Survey finds adoption is being shaped by what people do, not what they say.

In brief

  • People say they don’t trust AI, but their everyday choices show they are using it regardless.
  • Autonomous AI is no longer theoretical: 16% of people have used AI that acts on their behalf in the past six months alone.
  • AI use is moving from low-risk assistance to higher-stakes decisions, including health, finance and transport.

When people talk about artificial intelligence (AI) in the abstract, they tend to focus on what they fear. But when asked how they actually use it, a different picture emerges. Across everyday life, people have embedded AI into how they plan, decide and act — helping to map routes, answer questions or resolve issues.

At the edge of this picture, another important shift is occurring. A growing number of people are no longer just asking for advice; they are starting to let AI act on their behalf. Decision-making authority is migrating from humans to systems, and what started as low-risk assistance is now evolving into something far more consequential.

This shift from assistive to autonomous AI is not a distant future scenario. It’s already underway and unfolding faster than most organizations may realize.

Despite public discourse around AI remaining focused on trust, accountability and authenticity, the second annual EY AI Sentiment Study, based on insights from more than 18,000 people across 23 markets, shows that AI adoption is accelerating fast, well ahead of confidence.

Experience suggests that shifts like this rarely happen gradually. As early adopters begin to experiment, they often redefine what feels normal. What once seemed radical can become routine almost overnight.

For leaders, the implication of this shift is clear. The question is no longer whether people will accept autonomous AI. It’s how quickly that acceptance will spread, which users and markets will move first and whether organizations will be prepared to meet them.

As EY Global CEO Janet Truncale puts it: “As AI becomes more capable of making decisions on our behalf, trust is not a ‘bolt on’ advantage, it’s a ‘built in’ necessity. When AI is designed with clear guardrails and continuous oversight, organizations can expand autonomy with confidence.”

This report explores how everyday choices are accelerating the move from assistance to autonomy and what business leaders must do now to identify how to build confidence, where boundaries can be pushed and where to set a deliberate pace in a world that could become autonomous faster than expected.


Chapter 1

A meaningful minority is already letting AI decide

People are allowing AI to act on their behalf – a shift too big for companies to ignore.

Autonomous AI is no longer theoretical. In the past six months alone, 16% of people globally say they have already used AI systems that act on their behalf, without human intervention. And this delegation is showing up in everyday decisions. Ten percent of people report using AI agents to buy products on their behalf. Eleven percent let AI automatically refill shopping carts and make purchases. Another 11% allow AI to manage their finances and carry out banking tasks without intervention. And 9% have used a self-driving vehicle or taxi. 

What began as AI that advises is becoming AI that acts.

For those who have not yet used AI autonomously, openness to do so is clearly visible. When asked to think about possible future scenarios, people’s preference for AI that acts for them is strongest in familiar, low-risk moments. Rather than AI acting as an assistant or making recommendations, more than a third of respondents would prefer AI to automatically apply discounts at checkout (36%) or have an AI assistant contact customer service to fix issues without intervention (34%).

And interest doesn’t stop there. Even in higher-stakes situations, a meaningful share of people is willing to let AI take the lead. Thirty percent of respondents would prefer a home security system that automatically locks doors and notifies authorities of unusual activity. Others are open to an AI assistant automatically booking doctor’s appointments (21%) or having an AI avatar join meetings online while they are away (19%).


These behaviors may still represent a minority, but they are large enough to matter. Importantly, interest extends well beyond current users of autonomous AI. For instance, over a third of people who do not use AI today would trust autonomous AI to resolve service issues, exemplifying how quickly a meaningful minority can tip into a new mainstream expectation once autonomy proves reliable.

For leaders, the earliest signals of change are already visible, meaning it’s time to pay attention and prepare for what is likely to come next. Those that don’t risk being surprised by shifts that feel sudden only because the early signals of where autonomy is accepted were missed.

Australia-based Lendi Group is already responding to this shift. In 2025, the company launched Lendi Guardian, an always-on, AI-powered home and loan digital companion that actively monitors mortgage rates and equity changes, automatically pre-fills refinancing applications when rates drop and apprises customers along the way. By moving from recommendation to execution, the service increased platform engagement by 40%.

The shift to autonomous AI is not happening at the same speed everywhere. While early adopters appear in every market, some markets are further along the journey. In those we call Pioneer Markets, AI use is broader, more frequent and more deeply embedded in everyday life, creating the conditions for delegation to happen sooner and at greater scale. Within Pioneer Markets, 94% of people report using AI and nearly a quarter have used autonomous AI. These users are not simply experimenting more; they are relying on AI more often, across more contexts and with greater confidence in its role in daily decision-making.

Pioneer Markets: India, China, KSA, UAE, Mexico, Brazil, Hong Kong SAR, South Korea

Transitional Markets: Singapore, Italy, Denmark, Ireland, USA, Norway, Germany, France

Lagging Markets: Canada, Sweden, Finland, UK, Australia, New Zealand, Japan
What distinguishes these markets is experience. People in Pioneer Markets are more likely to have hands-on familiarity with AI and to have received meaningful training or education. They feel more comfortable sharing data to enable personalization, which in turn reinforces their more positive experience.


By contrast, other countries fall into two broad groups. In Transitional Markets, AI use and interest are growing, but lower confidence makes people more cautious about delegating tasks. Excitement exists but is still emerging. In Lagging Markets, there’s greater hesitation, with people using AI more selectively and questioning its relevance.

These differences point to varying levels of readiness. In Pioneer Markets, more advanced regulatory frameworks, cultural norms and organizational readiness create stronger conditions for autonomy to expand. For global companies, this creates a deployment dilemma as a single model designed for one context will likely fail in another.

Pioneer Markets offer an early view of how quickly AI adoption can happen when use, trust and capability advance together. These markets and early adopters are not edge cases to be contained; they are previews of how quickly norms can move once confidence takes hold.

Leaders must pay heed to a new strategic risk: not that autonomous AI will arrive before people are ready, but that it will arrive before organizations understand where people are willing to go, what gives them confidence, and how quickly expectations can change.


Chapter 2

AI has become everyone’s personal assistant

Everyday comfort lays the groundwork for assistance with higher-stakes tasks.

For everyday users, AI’s biggest breakthrough hasn’t been intelligence but convenience. By taking care of small, everyday tasks, AI has slipped into daily routines with little resistance. Route planning, movie suggestions and customer support are activities that people expect technology to handle. These are tasks where outcomes are easy to review, correct or override, and that sense of control matters.

The 2026 survey shows that experiences like these are the building blocks of confidence. As people interact with AI in ways that feel helpful and low effort, their hesitation gives way to acceptance. AI no longer feels novel or experimental; it’s simply useful.


AI’s growth is not evenly distributed across all activities, but the direction of travel is clear. In most cases, people are not seeking innovation for its own sake. They are looking for faster service (57%), lower prices (57%) and products and services that work better (42%). Interestingly, people’s desire to make fewer decisions is a weaker motivating factor (25%).


In retail, AI is increasingly being used to help people articulate intent rather than navigate filters. Instead of searching by keywords or categories, people can describe what they are trying to achieve and let AI narrow the field.

Last year, Nike launched NikeAI beta in their mobile app in the US. The updated app allows customers to describe what they want to do, such as train for a race, equip a team, or find products that meet specific preferences. AI in the app interprets that intention to make the best possible recommendation. The experience is designed to be simple, intuitive and easy to override, using large language models (LLMs) customized with Nike’s domain knowledge. By giving customers support when they need it but leaving final decisions firmly in their hands, assistive AI reduces friction, saves time and builds confidence without introducing risk.

Such situations may appear low stakes but are strategically important. Everyday interactions such as these help people learn whether a system feels reliable, understandable and aligned with their needs. They also allow organizations to gain early insights into where confidence is building, and where it can break down. For leaders, this means they can use lower-stakes, assistive-AI experiences as a diagnostic tool. These instances reveal which tasks people are willing to hand over, what safeguards they expect and how quickly comfort can grow once value is proven.

Health-related uses stand out because they begin to move beyond convenience into more consequential territory. Many people now use AI to describe symptoms, ask what might be wrong or decide whether to seek care. Roughly 230 million people ask health-related questions of ChatGPT each week across the globe.1

AI-powered symptom assessment apps like Buoy Health allow users to describe symptoms in their own words and tap into clinical insights. Buoy offers guidance toward what kind of care users may need. These tools are increasingly shaping real-world health decisions, including whether and when people enter formal care pathways.


Access to these kinds of tools helps explain why 26% of survey respondents report asking AI to diagnose symptoms, up from 19% last year. The same number (26%) use AI to look up health information without seeing a doctor. “What’s striking is how quickly AI is becoming a first stop for health questions,” says Kim Dalla Torre, EY Global Health Sector Leader. “As people begin to rely on it to guide real decisions, the need for safe, accurate and easily validated AI becomes far more important.”

Organizations should not rush toward full autonomy, but they should pay close attention to experiences that are already shaping expectations. It’s these moments that will determine how far, and how fast, people are willing to go.


Chapter 3

Why trust is lagging behind AI capability

AI use is racing ahead of people’s confidence in how it is governed, controlled and accounted for.

As AI becomes more embedded in everyday life, and in some cases begins to act autonomously, trust has not kept pace with capability. People remain concerned — even in Pioneer Markets — about security, control, accountability and authenticity. But interestingly, these concerns are not slowing adoption. They are shaping the way people want autonomy to show up — a critical distinction for leaders navigating the next phase of AI adoption.

Security is people’s biggest fear. Two-thirds worry about AI systems getting hacked or breached, and less than half trust governments or companies to protect personal data used by their AI systems. As autonomy increases, this concern becomes more than just a background worry. The more AI can do, the more security comes into focus. Leaders must consider what could happen if systems are compromised and how quickly that damage could scale.

People’s other major concern is control. They worry that decisions made by AI won’t reflect their personal values or priorities; seven in 10 agree that human oversight remains essential. This is not a rejection of AI but a request for agency and an ability to understand what the system is doing, intervene when it matters and opt out when stakes rise.


Six in 10 people worry about organizations failing to hold themselves accountable for AI use that leads to negative consequences. Almost as many (55%) worry that organizations will fail to comply with their own AI policies or relevant government regulations.

Some organizations are responding by making their AI commitments more explicit. For example, Tokio Marine Holdings, an insurance multinational, adopted a public AI policy as part of using AI for claims processing, fraud detection and for other customer service activities, to earn trust in the highly regulated insurance industry. Released in 2025, the policy states that the company will not rely on AI data or techniques that could introduce biases against any group, and that AI-based functions and decisions will always keep people in the loop. It further states that the company will prevent AI use that violates human rights, distributes false or biased information, or leads to leaks, falsifications or unauthorized use of personal information.

An overwhelming majority of respondents agree that AI outputs should clearly show when something is created or influenced by AI. Nearly three quarters worry that, as generative AI (GenAI) becomes more widespread, they will no longer be able to tell what is real or fake.

“The blurring of reality is one of the most underestimated AI risks,” says Katherine Boiciuc, EY Chief Technology and Innovation Officer for the Oceania region. “When people can’t easily tell what’s real, what’s generated, or who’s behind a decision, it creates a low-grade anxiety that erodes confidence — even when the technology itself is working as intended.”

Respondents also want AI tools to include parental controls and age limits and believe that countries and regions should have stronger rules governing organizational use. Across all three market types — Pioneer, Transitional and Lagging — people believe more safeguards should be enacted.


“What surprises me isn’t that people want stronger AI safeguards — it’s how consistent that expectation is across markets,” says Sarah Liang, EY Global Responsible AI Leader. “Regardless of where people live or how advanced AI adoption is, they’re asking for clearer rules, stronger accountability and visible protections.”

AI capability is advancing rapidly, while governance and accountability structures are clearly struggling to keep pace. As people increasingly delegate decisions to AI through everyday use, oversight through policy, regulation and organizational control is evolving more slowly. Organizations must therefore make their promises tangible by building systems that feel understandable, controllable and aligned with human needs. Trust can’t only live in principles; it has to show up in the experience of using AI.


Human-centered design has become one of the most powerful, yet under-used mechanisms that organizations have to close the trust gap.


Chapter 4

Designing AI that people trust

The more responsibility AI takes on, the more people’s trust in it will be shaped by how it behaves in moments that matter.

As AI systems become more advanced, it's clear that building trust doesn’t happen through policies, principles or technical assurances alone. It is built, or lost, through experience. What people see, understand and feel when they use AI to make or influence decisions increasingly determines whether they are willing to try it again.

In this sense, design is becoming the front line of AI governance. It’s where real people experience safeguards, accountability and control. Thoughtful design becomes the bridge between capability and comfort. When designed well, AI provides reassurance and clarity. When designed poorly, it does the opposite.

“The most trusted experiences expand autonomy gradually,” explains Peter Neufeld, EY Studio+ EMEIA Financial Services Digital Customer Experience Leader, “giving users confidence first and then expanding what the system is allowed to do. Designing for comfort is about pacing — people don’t resist autonomy, they resist losing control.”

The EY Human Signals report (pdf) explores the key attributes of successful AI experiences:

  • Emotional support: People looking for help care more about the emotion or complexity of the moment than the product they’re using.
  • Simplicity: Good design reduces cognitive load by making an experience as effortless as possible — breaking down complex tasks and clarifying next steps.
  • Proof and traceability: Superior design incorporates summaries, transcripts and logs so people feel like they are in control of the interaction, and to reduce misinterpretation.
  • Adjustment: It gives people the control they want over how “warm” or “neutral” an AI tool feels. If needed, it provides instant access to interacting with a human instead of a bot.
  • Inclusivity and accessibility: It has conversational interfaces that support people with different levels of literacy, language or accessibility.
  • Visible constraints: Trust comes from clearly showing what the system will not do, and giving users control over those boundaries. Auditability, override and human escalation are not a backup plan but a core feature of accountability.

A recent EY report on creating trustworthy AI-powered experiences (pdf) highlights how deeply design can come down to language, pacing and clarity. In the report, Miki Van Cleave, Chief Design Officer for Chase, the consumer and community bank at JPMorganChase, emphasizes the amount of care that must go into shaping AI-driven interactions. “We are here to build trust and start from a place of understanding the consumer mindset,” Van Cleave says. “We spend an inordinate amount of time thinking about the language that customers need to hear so that they are crystal clear on what they’re doing. For example, with fraud detection, we will spend hours, days and weeks dissecting every noun, verb and adjective on an interstitial page to make sure it helps customers know what to do.”

 

The need for clarity is not cosmetic. It determines whether people feel in control, particularly in high-stakes contexts such as finance.

 

Leaders will not decide whether AI becomes more autonomous. That trajectory is already in motion. What they can decide is how deliberately that autonomy is introduced — where it is constrained, where it is expanded and how confidence is built along the way.

 

The most successful organizations will not be those that move fastest to introduce AI everywhere, or those that take a wait-and-see approach. They will use design to set a deliberate pace: accelerating where trust and value already exist, and slowing down where clarity, safeguards or confidence are still needed.


Chapter 5

Implications for business leaders

Setting the pace of autonomy

The future of AI is not being shaped by abstract debate or long‑range forecasts. It is being shaped by everyday choices — by when, where and how people decide to let AI act on their behalf.

The findings from this year’s survey make one thing clear: AI adoption is moving faster than sentiment. A growing minority of people is already delegating decisions to AI, while many more are building confidence through everyday, low‑risk uses. In some markets, this progression is already well underway, offering an early view of how quickly expectations can shift once familiarity and value align.

For business leaders, current market dynamics create both an opportunity and a responsibility.

The opportunity lies in recognizing that autonomy is not arriving all at once. It is expanding through specific moments, users and markets, often faster than organizations expect, but not without pattern or signal. The responsibility lies in understanding those signals early, learning from where confidence is already forming, and responding deliberately rather than reactively.

Crucially, this is not a choice between speed and caution. The data shows that people are not waiting for perfect trust before moving forward. They are negotiating confidence in motion — continuing to use and, in some cases, delegate to AI while simultaneously asking for clearer safeguards, stronger accountability and greater transparency.

This puts leaders in a new position. The question is no longer whether autonomous AI will scale. It is whether organizations will shape that scaling intentionally, accelerating where trust and value already exist and slowing where clarity, safeguards or confidence are still needed.

What should leaders do next?

  • Lead with experience, not promises: Build trust through use. Prioritize AI applications that deliver clear, everyday value and allow people to review, override or opt out as confidence grows.
  • Segment by AI readiness, not demographics: People and markets are progressing at different speeds. Design experiences, messaging and controls that meet users where they are, rather than assuming a single path to adoption.
  • Make trust visible: Providing clear disclosures, strong data protections, human‑in‑the‑loop options, and transparent accountability is no longer a differentiator. It is a prerequisite as autonomy increases.
  • Design for emotional context: As AI takes on higher‑stakes tasks, design that considers empathy, simplicity and pacing is as important as performance. Design is not a finishing touch; it is a strategic catalyst for trust and adoption.

Leaders have a choice. They can allow delegation to expand by default, shaped by external forces and early adopters. Or they can design it deliberately, with visible constraints, clear accountability and confidence built in from the start.

Those who choose to act deliberately will do more than keep pace with change. They will earn a durable license to lead in an increasingly autonomous world.

The authors would like to thank Katherine Boiciuc, Oceania Chief Technology and Innovation Officer, Ernst & Young Australia; Peter Neufeld, EY Studio+ EMEIA Financial Services Digital Customer Experience Leader, Ernst & Young LLP; Krista Walpole, Associate Director, EYGS LLP; AnnMarie Pino, Associate Director, Ernst & Young LLP; Gaurav Batra, Associate Director, Ernst & Young LLP; and Harshil Milan Zatakia, Supervising Associate, Ernst & Young LLP for their contributions to this article.


Summary

Everyday behavior shows that AI adoption is advancing faster than public confidence. While people often voice concern about trust and control, they continue to integrate AI into daily decisions, gradually relying on it for more consequential tasks. Familiar, low-effort uses are normalizing AI's presence and laying the foundation for greater delegation. As a result, decision-making authority is beginning to shift from people to systems, often before organizations have fully understood where acceptance is forming or how quickly expectations are changing. 

