
Is AI at risk of becoming your next governance failure?



Find out how businesses can unlock the value of their AI investments through strong governance and risk management.


In brief

  • AI spend is rising fast, but most initiatives fail due to fragmented execution, weak governance and underestimated enterprise-wide risks.
  • AI must be managed as a strategic transformation, with executive ownership, integrated risk management and consistent enterprise standards.
  • Safeguarded AI programs, supported by multidisciplinary oversight, significantly improve trust, control and the likelihood of value realization.

Artificial Intelligence is no longer a future trend; it is a present reality that has moved decisively from experimentation to a board-level priority. Organizations across industries are committing substantial capital – often tens of millions of dollars across multiple initiatives – to AI as a lever for efficiency, innovation and growth. Gartner and IDC forecast global spending on AI infrastructure and services to reach USD 144.8 billion by 2028.1 Yet, despite the scale of investment, the return on AI remains highly uncertain. Industry estimates suggest that only a small fraction of AI initiatives deliver material business impact, while many introduce new risks, including ethical concerns, data exposure and regulatory scrutiny.

This growing disconnect between investment and outcomes highlights a fundamental governance challenge. AI cannot be approached as a series of disconnected pilots or technology-led experiments. It requires the same rigor, oversight and accountability as any other enterprise-wide transformation. To protect shareholder value and realize sustainable returns, AI investments must be embedded within a structured strategic transformation program – one with clear executive ownership, robust governance and integrated risk management at its core.

The AI investment reality

As companies seek to adopt AI and drive performance, their investments are often fragmented and use-case driven. Organizations typically fund isolated initiatives – such as customer service chatbots, back-office automation or predictive analytics in supply chains – without integrating them into a coherent enterprise strategy. These efforts are often executed in silos, with limited coordination across business units and no unifying transformation framework to align technology choices, governance and strategic business objectives.

 

This piecemeal approach masks the true nature of AI transformation. AI initiatives are inherently complex, cutting across multiple functions, data domains and organizational layers. They rely on advanced models that function as opaque “black boxes,” introducing dependencies and risks that are difficult to assess – even for technical experts. Without a program-level structure, organizations underestimate both the scale of the change and the level of oversight required to manage it effectively.


This fragmented approach exposes organizations to a broad range of risks that are often underestimated at the outset.

  • Strategy and governance: Many organizations lack a clearly articulated AI strategy and a defensible business case. In the absence of strong executive ownership and board-level oversight, AI initiatives remain disconnected from enterprise priorities, limiting both their strategic impact and their ability to scale.
  • People and change readiness: Organizations are frequently underprepared to adopt and sustain AI at scale. Cross-functional teams face persistent skills gaps, resistance to new ways of working and insufficient cultural readiness – factors that can stall or derail even the most promising initiatives.
  • Technology and operational risk: Underlying infrastructure is often not designed to support scalable AI. The transition from pilot to deployment introduces significant integration complexity, security vulnerabilities and operational instability, particularly when core systems are not AI-ready.
  • Data and model risk: Many organizations struggle with data quality, traceability and privacy. Without robust governance and monitoring, models may degrade over time, embed bias or fail to meet regulatory expectations – creating legal and reputational exposure.
  • Customer and ethical impact: AI systems that lack transparency, fairness or explainability can quickly erode customer trust and damage brand reputation, especially when ethical considerations are not addressed proactively.
Fewer than 4% of AI initiatives deliver their intended impact.

The result is predictably poor outcomes and missed returns. A significant proportion of AI initiatives fail to deliver material value – some estimates suggest that fewer than 4% achieve their intended impact2 – while others introduce risks such as biased decisions, security vulnerabilities or regulatory exposure. Siloed execution further compounds the problem: duplicated efforts, inconsistent standards and fragmented controls increase complexity and elevate enterprise risk. Without a programmatic, end-to-end view, organizations struggle to scale AI responsibly or capture enterprise-wide value from their investments.

This reality underscores the imperative for strong governance and operational discipline. AI investments of this magnitude require active executive ownership and sustained board-level oversight to ensure accountability and value realization. Organizations must adopt a coordinated, enterprise-wide approach – one that aligns AI initiatives to strategy, enforces common standards and manages risk centrally. This is the critical bridge between initial enthusiasm for AI and durable, scalable business impact.

Safeguarding AI-driven transformation

AI investments not only need to be embedded within a structured strategic transformation program; it is equally essential to safeguard such programs.

A robust, well-governed AI transformation is dependent on confidence and transparency in program performance. Boards, executive teams and program leaders require timely insight into the most critical risks in order to anticipate issues, support informed decision-making at every stage of the AI journey and increase the likelihood of achieving intended outcomes.

In practice, this often involves independent external assurance and advisory capabilities operating throughout the AI transformation, reporting directly to program leadership and executive sponsors, while working in close collaboration with delivery teams.

In our experience, effective approaches are supported by a scalable set of mechanisms applied across the AI program lifecycle.

Value-add of safeguarded AI transformations

Organizations that adopt this kind of structured, enterprise-level safeguarding typically achieve:

  • Clear visibility into material AI risks across every stage of the AI journey, helping executives identify, prioritize and address potential threats early – before they escalate into financial, regulatory or reputational incidents.
  • Greater organizational AI governance maturity, including keener awareness of, and stronger capabilities in, ethical deployment, regulatory compliance and infrastructure readiness, enabling teams to make informed decisions about when and how to deploy AI effectively.
  • A higher likelihood of achieving intended outcomes, including return on investment, as AI investments are continuously aligned to strategy, risk appetite and execution capacity rather than pursued in isolation.

Delivering AI at enterprise scale requires more than isolated expertise or point solutions. It demands coordinated insight across technology, regulation, ethics, operating models and change management applied consistently over time. For many organizations, partnering with an external provider that can offer this multidisciplinary support from a single source, while maintaining independence and a strong governance lens, can prove a critical enabler of confidence, control and sustained value realization.


Summary

While global spending on AI infrastructure and services is high and rising, the return on AI is uncertain, with many projects failing to deliver the anticipated business impact and value. Treating AI as an enterprise-wide transformation – and safeguarding it through robust governance and integrated risk management – is essential to ensure value generation and sustainable scale.

Acknowledgement

Many thanks to Fanny de Prémorel and Myriam Hadir for their valuable contribution to this article.



