Trust in AI won’t come from capability – it will come from clarity and empathy.
One in 10 people globally now socialise with AI – chatting with a virtual friend, confiding in a chatbot or engaging with a digital companion. But in Australia and Aotearoa New Zealand, the mood is different. We’re more sceptical, more wary, more anxious.
According to the EY AI Sentiment Index, only 37% of Australians and 28% of New Zealanders believe the benefits of AI outweigh the negatives.
Whether it’s misleading information, security risks or social engineering, Australians and New Zealanders are more concerned than their global counterparts. They also think AI is making us less intelligent (AU 53%, NZ 42%).
When we asked people to share their experiences with AI, words like “dehumanising”, “invasive” and “annoying” were common. “We as humans are becoming less and less useful,” one person noted.
What we have is not a technology problem. It’s an emotional problem.
What we can’t see, we don’t trust
Most people don’t realise how deeply AI is woven into our daily lives – from the voice that answers your questions, to the filters that sort your inbox, to the chatbot that handles your customer service query. It’s everywhere, but it’s often operating behind the scenes. And that’s the problem. When technology is hidden, it feels unaccountable. What we can’t see, we don’t trust.
Globally, excitement for AI is soaring – 84% of respondents in India and 83% in China say they are excited about the future of AI and what it means for them. Here, only 38% of Australians and 33% of New Zealanders share that optimism.
Despite the hesitation, AI is being embedded into every corner of business. Whether people are ready or not, it’s happening. That’s why we need leaders who bring emotional intelligence to the table – leaders who help people adapt, not just expect them to keep up.
And just as importantly, we need AI systems that are transparent and referenceable. People need to understand how decisions are made, where outputs come from, and what data is driving them. Trust isn’t built on complexity – it’s built on clarity. If we want AI to be accepted in our part of the world, it must be explainable, visible, and human-led.
Leadership in the AI era
Leaders helping their teams adapt will need technical understanding and strategic insight, alongside human relationship skills and change management capability. They must build enough technical literacy to oversee AI implementation while maintaining the emotional intelligence to support people through the transformation. This is human-centred leadership.
Since 2022, EY teams have been working in collaboration with the Saïd Business School, University of Oxford, to define six characteristics of human-centred leadership – and they apply here.
- Lead with empathy: Listen without judgment. Host open Q&A sessions where employees can voice concerns about AI. Acknowledge what you know and what you don’t.
- Inspire with vision: Use stories, not statistics, to explain how AI creates value. Make your vision clear, relatable and empowering.
- Care with insight: Understand that experiences with AI vary. Show people that their concerns are heard.
- Empower with autonomy: Set clear rules, roles and responsibilities for AI adoption. Show people the efficiency opportunities and communicate how you will and will not use AI. Give teams the freedom to experiment without fear of failure.
- Build with integrity: Balance innovation with ethical responsibility. Establish clear guidelines and ensure AI use is transparent and accountable.
- Collaborate with clarity: Make AI a shared journey. Encourage teams to share ideas, co-create solutions and build confidence together.
The world is embracing AI. Leaders in Australia and New Zealand must close the trust gap – not with technology, but with empathy.