
How can strong leadership on AI be the key to responsible adoption?

Nordic leaders face a pivotal moment to transform AI confidence into strategy and value through stronger leadership and clearer ownership.


In brief
  • Despite high confidence among Nordic CxOs, a clear gap remains between perceived AI readiness and actual governance maturity. 
  • Just 26% of Nordic CEOs are involved in emerging technology strategy, compared to 49% globally — pointing to fragmented accountability and potential risks.
  • AI is a strategic imperative for all organizations and requires direct CEO leadership and accountability across the C-suite.

As artificial intelligence (AI) continues to scale across enterprises, Nordic leaders face a pivotal moment — not only to accelerate adoption, but to lead with responsibility and purpose. Drawing on insights from the Responsible AI Pulse Survey 2025, this article explores how Nordic organizations are dealing with the ethical, cultural and governance challenges of AI at a time when confidence, transparency and accountability are more critical than ever.

As AI adoption accelerates, governance is also moving up the agenda — driven in part by growing regulatory momentum, including the EU AI Act. Yet questions persist: How consistently are responsible and ethical principles being applied in practice, and who within the enterprise truly owns the AI agenda? In the Nordics, where innovation and confidence go hand in hand, organizations are well positioned to take the lead — and clarifying accountability will be key to unlocking their full AI potential. Yet, more than half of Nordic companies — 53% — report struggling to assign clear accountability for emerging AI technologies. 

The Responsible AI Pulse Survey 2025 reinforces this urgency, revealing a significant gap between confidence and actual preparedness. With AI becoming central to business strategy, leadership oversight must evolve in parallel.

Chapter 1

AI landscape among Nordic CxOs

Nordic leaders are optimistic about AI, yet a values-led, cautious approach defines how adoption unfolds.

Across the Nordics, technology has always been met with enthusiasm, curiosity and a deep respect for societal value. These nations lead global digitalization indices and consistently rank high for innovation and digital infrastructure. Yet, as AI begins to influence everything from public services to boardroom decision-making, a more intricate challenge is taking shape.

Seventy-five percent of Nordic CxOs say they have already integrated AI into all or most initiatives, reflecting a strong commitment to embedding AI at the heart of business transformation. Sweden stands out in its AI maturity, leading the region with 87% of Swedish CxOs having already integrated AI into most initiatives across their organizations.

Businesses in the region are investing in future-proofing their workforce, with 61% of companies actively providing AI-related training — with Sweden leading at 77%, according to the Responsible AI Pulse Survey 2025. The focus remains on aligning emerging technologies with societal values such as confidence, transparency and inclusion. Nordic leaders share the global optimism around AI, recognizing it as a transformative force that can enhance productivity, enable smarter decision-making, and drive competitive differentiation.

While many see AI as a force for innovation and transformation, they are also acutely aware of its potential risks. The top concerns include generating unreliable outputs, security breaches and failing to protect data privacy. This focus likely reflects longstanding awareness within risk management, where cybersecurity and privacy have been critical issues for over a decade. Nordic leaders are therefore responding to AI risks through the lens of these familiar challenges, aware of both the potential impact and recent incidents related to data protection and system vulnerabilities.

Interestingly, 67% of Swedish CxOs report low concern about AI-generated misinformation. This may reflect higher levels of confidence in institutions, technology suppliers, or the technology itself. It could also relate to how AI is used today — if it’s applied in non-critical or non-autonomous processes, then the quality of the output may not feel as important.

However, this confidence stands in contrast to the more cautious sentiment seen elsewhere in the Nordics, where concerns about misinformation, data security and privacy are more pronounced. It’s also worth noting that no national institutions currently follow up on these issues, and many of the regulations that do exist are not yet fully in force.

These differences between Nordic countries highlight varying levels of AI maturity and public trust. They also raise questions about how to reconcile conflicting data between key studies. Despite these differences, there is a shared understanding across the Nordics: AI requires stronger governance, clearer accountability and a commitment to public trust. As organizations move from experimentation into applying AI in more critical processes, the risks will grow — making robust governance, explainability, bias mitigation and regulatory alignment more necessary than ever for realizing AI’s full potential.

Chapter 2

Navigating the complexities of AI accountability

With strong confidence and a supportive culture, Nordic leaders have a unique opportunity to lead in AI governance and create long-term value.

The Responsible AI Pulse Survey 2025 found that 74% of Nordic CxOs believe their AI controls are moderate to strong. However, when measured against the EY responsible AI framework’s nine core principles for safe and ethical AI, the data tells a different story: on average, organizations have strong controls in only three of the nine facets.

At the same time, 50% of Nordic companies are still grappling with governance challenges related to current AI technologies. This reveals a worrying gap between perceived readiness and actual governance maturity. Leaders are eager to demonstrate preparedness, but the operational mechanisms to support ethical, compliant and explainable AI appear to be underdeveloped. 

Despite these varied perceptions, one shared concern remains: how to align AI development with public expectations. Globally, CxOs appear significantly more confident than consumers when it comes to risks associated with AI. According to the EY AI Sentiment Index and Global Responsible AI Pulse Survey, nearly two-thirds of CxOs (63%) believe they are well aligned with consumer expectations. Yet consumer concerns — particularly around privacy, misinformation and explainability — are often twice as strong as those of executives. This confidence gap is not unique to any one region; it reflects global misalignment that is equally relevant in the Nordic context, where societal confidence is a core cultural value. The challenge for Nordic leaders is clear: to match their governance frameworks with meaningful public engagement and transparent communication.

AI ownership is everyone’s job — and no one’s mandate 

According to the EY Reimagining Industry Futures Study 2025, only 26% of Nordic CEOs are actively involved in shaping their organization’s emerging technology strategy — far below the global average of 49%. At the same time, the Responsible AI Pulse Survey 2025 reveals that Nordic CEOs express the greatest concerns about AI risks compared to the rest of the C-suite, and are least likely to say their organizations have strong controls in place to govern AI responsibly. This paradox — of concern without ownership — can have strategic consequences. This sentiment doesn’t just apply to the CEO. In fact, more than half of Nordic companies — 53%  — have difficulty assigning accountability for emerging AI technologies.  

There may be a cultural explanation for this phenomenon. Nordic organizations are renowned for their flat hierarchies and empowered teams. Employees at all levels feel confident to make decisions. While this encourages agility and inclusion, it also means that emerging technology strategies can become everyone’s task — and therefore no one’s clear mandate.

Without common governance principles, AI risk is often fragmented, and organizations may find themselves unprepared for regulatory frameworks like the EU AI Act. For the Nordics, where compliance is a reputational strength, the absence of clear AI governance poses more than just an operational challenge — it risks undermining the region’s credibility on the global stage. This is particularly relevant for several Nordic-headquartered consumer brands that have built their identities around transparency, sustainability and strong corporate governance.

For these organizations, AI is not just a digital tool — it's a reflection of brand values. Failing to embed governance into AI systems could erode the very confidence they've worked hard to earn. To remain future fit, these companies must ensure that their AI strategies align with ethical frameworks, regulatory demands and measurable impacts.

This disconnect between executive concern and active leadership involvement represents a significant missed opportunity. For a technology poised to fundamentally reshape how organizations operate — from decision-making to value creation — it is no longer sufficient for CEOs to remain on the sidelines. Their engagement must go beyond rhetoric and into the core of strategic planning. In the absence of direct CEO accountability, efforts to govern AI ethically and deploy it effectively risk becoming fragmented or reactive, rather than proactive and transformative.

Currently, most AI use cases in the Nordics remain low-stakes and experimental — focused on automating emails, summarizing documents, or enhancing internal workflows. While these are useful starting points, they barely scratch the surface of AI’s potential. To fully unlock its transformative power, Nordic organizations must embed AI thinking into their long-term strategies and risk frameworks. This will require revisiting governance models, rethinking accountability, and elevating AI from a technological initiative to a boardroom priority — one that is actively shaped by the very leaders tasked with futureproofing the enterprise.

Chapter 3

People-first transformation to build confidence in AI

Nordic firms embed a people-first AI strategy to boost confidence, scale adoption, and align with ethical values.

Nordic organizations are not just adopting AI — they are reimagining how transformation happens when humans are placed at the center. This people-first mindset is now being applied to AI, where building confidence, transparency and inclusion are not just values — they are strategic enablers.

While challenges remain, Nordic organizations are already taking tangible steps forward in this space. Sixty-one percent of Nordic companies are investing in employee upskilling to mitigate the risks of emerging AI technologies, with Sweden leading the pack at 77% — both above the global average. But beyond training, organizations are embedding responsible AI into the fabric of their transformation strategies. In Sweden, companies are building internal AI councils and ethics boards. In Finland, public-private partnerships with universities are embedding responsible AI thinking into the national curriculum.

This shift emphasizes that AI transformation must be immersive, inclusive and insights-driven. Change is no longer a top-down directive — it’s a dialogue. Organizations that create space for employees to engage with AI, share feedback and co-create solutions are seeing faster adoption, stronger alignment and more sustainable outcomes.

At the same time, AI is increasingly tied to broader ESG strategies and environmental impact. Across the region, this is coming to life through use cases such as smart-grid optimization to drive carbon reductions and AI-supported land-use mapping for sustainable farming. While only 25% of Nordic CxOs are highly concerned about the environmental cost of AI use, this concern is expected to grow — especially in a region that views technology as a catalyst to serve society.

The commitment to leveraging AI for positive impact is not just a goal in this region — it’s an operational reality. But Nordic organizations need to embed ethical decision-making into AI workflows today to become the confident companies of tomorrow.

From readiness to responsibility

Nordic companies are poised to lead in building sustainable confidence in AI. Doing so will require cultivating new leadership capabilities — ones that embrace radical uncertainty, foster psychological safety, and champion ethical and transparent decision-making at scale. It’s crucial that leaders view responsible AI not as a compliance burden, but as a brand differentiator — and a futureproofing necessity.

To truly scale confidence in responsible AI, Nordic companies must:

1. Elevate leadership and accountability: AI is no longer solely a matter for the IT department — CEOs need to take an active role in shaping their responsible AI strategy. Executive ownership across the C-suite and the wider business is needed to support alignment, accountability and prioritization of AI initiatives and AI governance throughout the organization.

2. Democratize fluency: Empowering employees with the necessary skills is essential for successful AI adoption. By upskilling the workforce, organizations can build a culture of AI literacy and preparedness across all levels.

3. Operationalize governance: AI governance is not a “one-and-done” effort; it must be built and consistently monitored as the organization and emerging AI technologies evolve. Companies should embed responsible AI principles into workflows, not just policies, to drive AI initiatives that are robust, ethical and scalable.

In the Nordics, “AI for good” is not a slogan — it’s a strategy. And the organizations that translate intent into infrastructure will be the ones that earn and sustain public trust.


Summary

Nordic organizations are at a critical juncture where confidence in AI must translate into strategic action and long-term value. While adoption is high, gaps remain in governance, accountability and leadership involvement. To fully realize AI’s potential, leaders must align technological ambition with clear ownership, ethical principles and public trust. By doing so, the Nordics have the opportunity to lead with a model of AI that is transparent, inclusive and built to last.
