
How a license to lead can transform human potential in an AI world


Realizing the benefits of AI requires leaders to build confidence and agency — not just technology.


In brief:

  • The EY AI Sentiment Index Study shows most people (82%) are already using AI to improve how they live and work, but just 57% feel comfortable with it.
  • AI’s potential excites people as much as it worries them. Leaders must tap into this enthusiasm while addressing real concerns.
  • Great leaders help people engage with AI. When it’s relevant, intuitive and human, it’s transformative.

Artificial intelligence (AI) has become an integral part of the way we live and work. The AI Sentiment Index Study, a global survey of over 15,000 people, shows that 82% of respondents had consciously used AI in the past six months. And many will have used or relied on AI without even realizing it. This is not just a technology revolution — it’s a human one. AI is changing what people can achieve.

However, there is an adoption gap — a space between how much people are willing to use AI and how much they actually do. The gap stems partly from concerns around trust, privacy and control, and partly from the limits of what’s currently available. Better AI tools matter, but it’s equally important that people want to use them and see their value. Closing this gap is a significant opportunity for organizations.

This is where leadership is critical. Organizations that actively create confidence around AI, demonstrate its benefits, and empower people to engage on their own terms will put themselves in the strongest position — not just to implement AI, but to shape its role in business and society.

We call this the “license to lead.” Organizations can earn and grow their license by using AI in ways that align with human needs and expectations, while enhancing human potential rather than diminishing it.

AI opportunity Venn diagram

This could be more of a challenge than many leaders imagine. As Laurence Buchanan, EY Global Customer and Growth Leader, says: “While AI is advancing rapidly, trust in many of the organizations hoping to shape its future remains low. Trust in AI is not just about the technology itself — it’s about whether people are confident that organizations will use it in ways that serve them.”

This report explores where adoption gaps exist, what it takes to close them, and how organizations can create a license to lead, so they are best placed to benefit from AI now and to shape its future.



Chapter 1

AI is reshaping our daily lives

Understanding how people feel about AI today reveals insights into its future.

People’s engagement with AI today reflects a focus on practicality. Most are not interested in AI in itself; they want to know how it can help them meet existing goals. Today, the most common uses are for straightforward, efficiency-driven tasks. Some applications are highly specific, such as managing electrical consumption, while others are more general and widely applicable, like learning about a topic or summarizing information. When AI provides immediate tangible value, people are interested.

But AI adoption depends on confidence as much as it does on functionality, and there are clear boundaries to where people feel comfortable adopting AI today. More complex systems, tasks requiring personal data, or emotionally engaged interactions remain less commonly used. These applications often demand a greater level of confidence or user engagement, or they require people to use technologies in ways they don’t understand.

Overall, these boundaries — which we explore in more detail later — will shift as AI evolves. It’s important to track them, so business leaders can make decisions based on where people are now and where they are heading, not where they used to be. Leaders who assume AI adoption will follow a simple trajectory will miss the deeper reality: as AI becomes more powerful, it needs to become more trusted and more intuitive.

The AI Sentiment Index does exactly this, quantifying global levels of comfort with AI. Today, the global index score is 68 out of 100.

Those who are most comfortable with AI are significantly more engaged — on average, they’ve used 15 different AI applications in the past six months, compared to six among those who feel neutral and just three among those who remain uncomfortable. The data highlights a reinforcing effect: those who feel comfortable with AI tend to explore more applications, gradually increasing their confidence and usage.


These early adopters provide a glimpse into AI’s future: they’re not only more accepting of AI-powered recommendations and automation but also more likely to appreciate AI-driven customer experiences, such as chatbots, and even social interactions.

Growing adoption is about making sure the next wave of users feel empowered, not left behind. People lean toward AI when they understand it, and they understand it best when they have the chance to try it. As AI continues to embed itself into people’s lives, those who recognize the differences in how people engage with it — what excites them, what holds them back — will have the clearest view of where AI is heading next. Organizations that cultivate confidence — by creating safe opportunities for people to explore AI — will be best positioned to accelerate adoption and shape AI’s role in society.

Key question: How are you designing AI experiences that align with real human needs, rather than assuming adoption will happen on its own?


Chapter 2

Attitudes about AI are deeply personal

Understanding AI sentiment across six key personas highlights both opportunities and risks.

AI adoption and sentiment are not uniform around the world. Demographic factors like age, education and geography play an important role in how people are relating to AI. But psychographics — how people think, what they value, and their emotional response to technology — are just as critical. The AI Sentiment Index reveals significant global variation in all these areas, highlighting both opportunities and risks for businesses. AI is not a one-size-fits-all story — it’s a deeply personal, context-driven experience.

At a national level, AI sentiment varies widely. Skepticism remains more pronounced in France (51), New Zealand (52), and the UK (54), which sit at the lower end of the Index. Countries like India and China are leading the way, with sentiment scores of 88, reflecting optimism and deep AI integration into daily life. These differences reflect more than just policy or infrastructure — they reveal how different societies are internalizing AI’s role in their futures.

To better understand these variations, we identified six distinct AI sentiment personas. These provide a useful way of mapping global differences in AI engagement — from those who are most excited to those who remain deeply skeptical.

  1. Cautious optimists: Welcome AI’s potential while remaining mindful of risks.
  2. Unworried socialites: Embrace AI’s benefits with few reservations.
  3. Tech champions: Frequently use AI and see long-term benefits but still advocate for regulation.
  4. Hesitant mainstreamers: Express concerns about data privacy and transparency but recognize the benefits AI could bring to society.
  5. Passive bystanders: Express concerns about misinformation and maintain an ambivalent attitude toward AI’s adoption and impact.
  6. AI rejectors: Resist AI altogether, prioritizing human connection and advocating for strict regulations.


Our study highlights a fundamental reality: Discomfort with AI does not mean disengagement. People are still finding ways to use it, even as they question its broader implications. For example, hesitant mainstreamers worry about data privacy, but 76% agree AI makes it easier to complete technical or academic tasks. Even passive bystanders, who engage less frequently with AI, still interact with it in some form.

People with concerns about AI still recognize its benefits — apart from those who reject it outright. They find ways to engage with AI where they see clear value. For some organizations, this is a moment of strategic choice: Do you see AI hesitation as a barrier, or as an opportunity to build familiarity and confidence? Can you address concerns, while recognizing and building on the desire people have to embrace AI?

For businesses and governments alike, these personas provide a powerful lens for understanding where the opportunities are. They also show the value of helping people feel empowered to use AI, rather than convincing them to do so. Organizations that recognize this will not just drive adoption, they will shape AI’s role in society. “AI won’t be widely embraced just because it exists,” says Raj Sharma, EY Global Managing Partner of Growth and Innovation. “What matters is helping people cross the threshold from curiosity to confident engagement.”

Key question: What are you doing to make AI feel tangible, useful, and relevant to the people you serve — whether employees, customers, or citizens?


Chapter 3

AI must support people, not diminish them

What do people trust AI to do, and where do they draw the line?

Across multiple areas of life and work, people’s openness to AI is higher than their current level of engagement. This is the AI adoption gap. It represents a significant opportunity, and closing it is about more than access to technology.


AI use today is concentrated in certain key areas. It’s highest in customer experience (CX), with 31% using AI to access customer support, and in personal applications like content translation (29%). Yet even in sectors where AI adoption is lower — such as energy or financial services — the AI Sentiment Index shows that people are open to AI playing a role.

Many are likely already using services or experiences driven by AI, but without realizing it. As Sameer Gupta, EY North America Financial Services Organization Advanced Analytics Leader, notes: “It’s encouraging that the research shows people have a high degree of comfort with financial institutions using AI to protect against fraud. But with few respondents seemingly aware of the extent to which AI is embedded in fraud prevention processes, there is an opportunity for financial institutions to better educate customers on how AI is already deployed for their benefit.”

Some of the most promising AI applications validated by our study align with areas where businesses are actively developing solutions. These include:

  • Media and entertainment: Personalized content recommendations
  • Technology: Managing smart devices
  • Retail: Accessing customer support
  • Health: Diagnosing symptoms
  • Financial services: AI-driven financial wellness

In developing these solutions, it’s valuable to note this finding from our study: Agency is as important as privacy. People are more comfortable with AI in monitoring and preventative applications and become wary when AI handles personal data or makes decisions on their behalf. For example, they are relaxed about AI monitoring that keeps a vehicle up-to-date with maintenance or prevents shoplifting, but they become highly uncomfortable with AI monitoring that’s trying to improve shopping experiences or recommend ways of making employees more efficient. Only four in 10 people feel comfortable with AI being used to monitor employees for efficiency, analyze resumes for hiring, or assess employee performance. Even younger generations — who tend to be more comfortable with AI in general — remain hesitant when it comes to AI’s role in workplace decision-making.

Openness to AI declines even further when the technology is used to make decisions that humans would normally make. While 60% are comfortable with AI preventing crime, only 45% are comfortable with AI making legal decisions. In health care, 57% support AI predicting health issues, but only 37% trust AI as a medical practitioner.

Discomfort is about the role people play in AI-driven systems, not the technology itself. The fear isn’t so much about AI replacing people, it’s about AI diminishing the value of people thinking critically, making choices, and having autonomy. That’s why bridging the adoption gap requires not just technological advancements, but a nuanced approach that aligns AI’s evolution with real human concerns and expectations. It also requires a mindset shift — not “How do we convince people to use AI?” but “How do we create the conditions where people want to use AI?”

“Closing the AI adoption gap requires more than advanced technology — it demands that leaders build genuine trust and create meaningful opportunities for human engagement,” says Matt Barrington, EY Americas Chief Technology Officer. “As AI agents emerge, successful adoption hinges on how well people are empowered to embrace AI confidently and integrate it into their lives and work in ways that feel intuitive and valuable.”


Empowerment is critical. Are people learning to use AI because they fear being left behind, or because they see how it makes them better at what they do? Organizations need to create space for people to explore AI on their own terms — safe opportunities to experiment, learn and build confidence. Adoption doesn’t happen through reassurance; it happens through experience. AI is a social change, and like any social change, it will take root when people feel ownership of it.

Key questions:

  • How are you creating opportunities for people to explore and engage with AI in ways that are meaningful to them?
  • What steps are you taking to bridge the trust deficit, so that people feel AI is working in their best interests?
  • What are you doing to create safe, low-risk opportunities for people to play, experiment and develop real confidence with AI?

Chapter 4

Control or agency — what really matters?

The way leaders design AI systems will determine whether AI improves human decision-making or erodes it.

If people feel more confident with AI, what might become possible? How ready are people to let AI take a more autonomous role in their lives? Beyond simple automation, AI can take proactive steps to assist, predict and personalize experiences. The technology is evolving rapidly, but it still largely operates at a broad, reactive level. So, AI serves up ads based on past searches, recommends content based on general interests, and nudges people toward familiar consumer choices. It doesn’t yet anticipate real-life context or deeply understand individual needs in a meaningful way. The next frontier isn’t just AI that reacts, but AI that truly aligns with human intent and aspirations.

Could an AI agent be trusted to order groceries based on what it knows about someone’s schedule, tastes, health goals, and what’s already in their kitchen — without human input? Would people be comfortable with AI making high-stakes, personalized decisions on their behalf? And if so, who would they trust with the personal data needed to make that happen?

The data suggests that while people are open to AI playing a greater role, boundaries around decision-making exist. The majority are comfortable with agentic AI predicting emergency situations (64%) or protecting against fraud (63%). But even in areas where AI could improve efficiency, such as evaluating insurance or fraud claims, comfort levels remain moderate at 46%.

People still want humans in control over decisions that shape their lives. They are reluctant to let AI fully replace human judgment in high-stakes personal interactions. And while AI-driven personalization is widely used today, just 41% are comfortable with companies using their personal data and past behaviors to make tailored product or service recommendations.

This is the “social paradox” of AI adoption: many people enjoy interacting with AI and see its benefits, yet at the same time, they fear it eroding human agency, decision-making, and connection.

AI is an opportunity to transform what’s possible, to embrace new perspectives on the synergy between people and technology.

Yet in some areas people are already accepting AI’s ability to make complex, real-time decisions. Modern vehicles, for example, are full of technologies that help people drive better. In our study, 54% of respondents would be comfortable with AI optimizing their navigation or driving. Services like Waymo One, which now offers fully autonomous ride-hailing in major US cities, show that AI-powered driving isn’t just theoretical — it’s already on the roads. Cities like Los Angeles are using AI to analyze traffic patterns and optimize traffic light timings, reducing congestion.

The same principle is emerging in B2B applications, where manufacturers and retailers are using AI for touchless automated ordering and supply chain digital twins, anticipating demand and responding dynamically. In these examples, and others like them, AI is enabling people to make more strategic, high-value contributions.

AI is also reshaping what we might think of as uniquely human forms of interaction. For example, 72% of people comfortable with AI in our study believe talking with AI can help some people develop better social skills, and 54% say chatting with an AI companion can be as enjoyable as talking to a human. Among our six personas, 30% of cautious optimists and unworried socialites say they have formed an emotional connection with AI in the last six months.

At its best, AI is not just an impersonal machine or a functional tool — it’s an enabler, helping people connect, learn and create. The opportunity lies in ensuring AI supports human connection rather than replacing it, helping people to feel more confident in using AI while creating meaningful ways for people to interact, learn and grow. “AI is an opportunity to transform what’s possible, to embrace new perspectives on the synergy between people and technology,” believes Hanne Jesca Bax, EY Global Vice Chair - Markets. “By focusing on distinct human qualities, such as empathy and ethical judgment, and how this can improve machine capabilities, leaders will demonstrate that it doesn’t need to be humans or AI, but both. Creating an environment where individuals feel more secure in exploring AI to boost their effectiveness is key.”

This is where an AI-first mindset becomes critical. Organizations that focus only on AI’s capabilities — without considering the humans who engage with it — will struggle to drive adoption. Success depends on making AI intuitive, empowering, and embedded in ways that amplify human agency rather than undermine it.

Key questions:

  • How are you ensuring AI strengthens human agency — helping people do more, not just overseeing the machine?
  • How are you positioning AI as a tool for enhancing human connection, rather than one that risks replacing it?

Chapter 5

License to lead: Do you have what it takes?

AI’s future will be shaped by leaders who build confidence, empower people, and create a bold vision of AI as a tool for human potential.

While many people are enthusiastic about — or at least open to — a greater role for AI in their lives, this confidence might prove fragile. Even among those who feel comfortable with AI, concerns remain — particularly around misinformation, data privacy and the need for clear human oversight. Three-quarters of respondents (75%) worry about AI-generated false information being taken seriously, 67% fear AI will become uncontrollable without human oversight, and 64% are concerned about AI training on personal data without consent.


The trust deficit is not just a risk — it’s a defining strategic challenge. Across industries, people are uncertain whether businesses will manage AI in ways that truly serve them. Even in technology, where AI innovation is most advanced, trust sits at just 49%. Financial services (42%), health care (47%), and consumer goods (44%) show similar patterns. Government (39%) and media (38%) — two areas critical to AI’s role in public life — are even lower, reinforcing concerns about AI’s impact on information integrity and governance. For leaders, the question is no longer, “Will people trust AI?” but “How will we earn and sustain their confidence at scale?”


This is what leadership in AI truly means. It’s not just about implementing the technology but about shaping a future where AI expands human potential. The organizations that succeed will be those that recognize AI’s real power lies not in automation, but in augmentation — elevating what people can achieve rather than replacing them.

“Leaders must prioritize practices like keeping humans in the loop at key points,” says Joe Depa, EY Global Chief Innovation Officer. “This includes rigorous testing to identify and mitigate biases and developing robust safeguards against misuse, all while preparing our workforce for the next generation of work.”

Organizations that succeed in AI will be those that balance innovation with responsibility. Addressing fears around misinformation, bias and privacy is a prerequisite for adoption. This means proactively tackling concerns rather than reacting to criticism, and committing to clear oversight, transparency and ethical AI practices — not just in principle, but in execution. But true leadership requires more than just mitigating risk — it requires a bold vision of AI as a catalyst for human ingenuity, imagination, and progress.

This is your license to lead. Not because AI demands new governance models, but because it presents a once-in-a-generation opportunity to transform what’s possible. The organizations that lead in AI won’t be those that build the best tools — they will be the ones that empower people to do their best work, to think bigger, to create more, to solve harder problems. The future of AI is not about the technology itself — it’s about what humans will achieve with it.

Key questions:

  • What are you doing to build an AI-first mindset within your organization, making AI feel like a natural, empowering part of work?
  • What bold vision are you setting for how AI will transform what’s possible for people, not just processes — and how does that shape your license to lead?
  • How are you ensuring AI enables creativity, problem-solving and human ingenuity, rather than just improving efficiency?
  • How are you helping your workforce develop the habits, mindsets and skills that will allow them to thrive in an AI-powered world?

Summary

AI has become a fundamental part of daily life, yet gaps in trust and engagement persist. Organizations that bridge these gaps — by making AI intuitive, relevant and empowering — will lead the way. Success depends on more than technology; it requires a bold vision for how AI can expand human potential. Leaders who create confidence, enable exploration and embed AI in meaningful ways won’t just drive adoption — they’ll define AI’s role in shaping the future. The opportunity is clear: The time to lead is now.
