
AI needs a control tower before it takes off

AI is rising fast, but without governance, leaders risk flying blind. A control tower builds trust, safety, and scalable impact.


In brief:

  • AI adoption is accelerating, but governance gaps risk trust, safety and compliance.
  • Only 17% of Australian organisations have fully integrated AI with robust controls.
  • EY’s Responsible AI and ServiceNow’s AI Control Tower offer visibility and risk management at scale.

AI isn’t ready for lift-off without a control tower.

Artificial intelligence is being rolled out across Australia and New Zealand, but consumers are twice as concerned as C-suite leaders about how it’s being used.

Without a complete 360-degree view of every AI asset and every risk, leaders are flying blind. Governance can’t be bolted on after take-off.


Bridging the trust gap

Earlier this year, EY surveyed nearly 1,000 C-suite leaders with responsibility for AI at organisations earning over US$1 billion annually across 21 countries.

In Australia, just 17% of organisations have fully integrated AI. Most are in the ‘almost there’ phase, where governance and robust controls are still lining up with business ambition.

In fact, 67% of Australian C-suite leaders told us they find it challenging to develop governance frameworks for current AI technologies.

And more than half (53%) admit their approach to technology-related risks is insufficient to address the next wave of AI.

Maturity of AI deployment (Global / Australia)

  • Fully integrated and scaled AI solutions are in place: 31% / 17%
  • AI solutions are integrated into most initiatives and are being refined: 41% / 43%
  • AI solutions are in the process of being integrated, aligned to a strategic plan: 27% / 40%
  • Pilot AI projects or proofs-of-concept are being conducted to validate feasibility: 1% / 0%


The boardroom tension: Speed versus stewardship

  • The executive view: Your leadership team has a dozen AI pilots proving their worth. The pressure to go live is intense.
  • The board view: Directors want assurance. Where’s the risk register? How do we know the AI isn’t misrepresenting facts, introducing bias or breaching regulations? How will we track value and risk over time?

Both perspectives are right. Moving too slowly risks losing market share. Moving too fast risks public trust, regulatory penalties and reputational damage.

 

In reality, leaders’ familiarity with the risks is patchy. In some cases, just one in five leaders is moderately or extremely familiar with the risks of the technology they are already deploying, or plan to within a year.

 

Organisations are committing to rollouts without having the controls, policies or readiness to manage the risks.


Integration sharpens awareness

Leaders in organisations with fully integrated AI report higher concern levels across almost all Responsible AI principles compared to those in the “mostly integrated” or “in process” stages.

What does this tell us? It suggests experience breeds realism.

Leaders who have navigated the complexity of full integration have a clearer view of the scope and seriousness of governance challenges. Early-stage adopters may risk blind spots and underestimate the governance required until scale exposes the gaps.


EY Responsible AI: Nine principles, one goal

EY’s Responsible AI framework gives leaders a decision-making compass to scale AI with confidence.

  1. Accountability: Assign unambiguous, transparent and documented ownership over AI systems, impacts and outputs, with named people responsible at every stage.
  2. Data protection: Use data legally, ethically and securely, protecting privacy and confidentiality at all times.
  3. Reliability: Align AI systems with stakeholder expectations and perform with precision and consistency.
  4. Security: Secure AI systems from unauthorised access and enable fast recovery if something goes wrong.
  5. Transparency: Disclose how AI is designed, what it’s used for and its limitations.
  6. Explainability: Make AI decisions understandable so they can be validated and challenged by human operators.
  7. Fairness: Design and use AI to avoid bias and promote a positive and inclusive society.
  8. Compliance: Meet all relevant laws, regulations and professional standards.
  9. Sustainability: Consider AI’s environmental impact throughout the lifecycle.

AI isn’t like other technology rollouts. It’s not a ‘set and forget’ exercise. Governance and controls must evolve alongside every new AI capability. Maintaining trust means continuously upskilling on those risks – and clearly showing how those risks are being managed.

Built on global standards

The detailed definitions of these principles have been crafted and validated against leading ethical AI frameworks and standards. These include the US National Institute of Standards and Technology, the International Organization for Standardization, the Organisation for Economic Co-operation and Development, the European Union’s expert group on AI, and EY’s own research and tools.

Read more about EY’s Responsible AI framework


EY Responsible AI: A practical framework

EY’s Responsible AI framework turns governance into a growth enabler. When we work with clients, we guide them through a nine-step process that builds accountability, trust and readiness into AI strategy.

  1. Set the vision: Define what AI means for your organisation and establish a clear strategy.
  2. Define the governance model: Establish oversight, risk management and decision-making processes.
  3. Determine the risk framework: Set your boundaries and assess enterprise-level AI risks.
  4. Create an AI inventory: Catalogue every model, dataset and tool (in-house and third-party).
  5. Establish policies: Develop clear standards, procedures and compliance requirements.
  6. Shape the operating model: Assign roles, responsibilities and resource requirements.
  7. Monitor performance and compliance: Track outcomes, risks and regulatory compliance against clear KPIs.
  8. Enable and train your teams: Provide training, resources and support to embed Responsible AI.
  9. Reassess: Review performance, resolve issues and feed lessons back into governance.

The gap between AI deployment and risk management is the leading indicator of what we call ‘velocity loss’. When acceleration collides with missing controls, innovation stalls, resources are wasted and competitors gain ground.

EY has invested over 600,000 hours in AI training – and from that, we’ve learned that standardised, responsible AI isn’t optional. Neither are trusted partners. Go it alone, and you have an experiment. Work together, and you have impact at scale.

ServiceNow AI Control Tower

To operationalise the EY Responsible AI framework, EY partners with ServiceNow to deploy the AI Control Tower.

Think of the AI Control Tower as the central command centre for every AI application in your business – a single place to see, govern and optimise AI.

We can help you understand what AI is running, as well as where, how and at what level of risk.


AI mission control: An AI asset inventory

  • Track every AI model, dataset, system and agent, whether built in-house, bought from a vendor, or embedded in SaaS tools.
  • Link each AI asset to the business services it supports, so you know what it’s doing, where it’s running and the risk it carries.
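As an illustration only, the inventory records described above might be modelled along these lines. This is a hypothetical sketch in plain Python – the field names, asset types and risk labels are assumptions for the example, not the ServiceNow AI Control Tower’s actual schema or API.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record -- illustrative only, not ServiceNow's data model.
@dataclass
class AIAsset:
    name: str
    asset_type: str                      # e.g. "model", "dataset", "system", "agent"
    origin: str                          # e.g. "in-house", "vendor", "embedded-saas"
    business_services: list[str] = field(default_factory=list)
    risk_level: str = "unassessed"       # e.g. "low", "medium", "high"

inventory = [
    AIAsset("invoice-classifier", "model", "in-house",
            ["Accounts Payable"], "medium"),
    AIAsset("chat-assistant", "agent", "embedded-saas",
            ["Customer Support"], "high"),
]

# Surface high-risk assets together with the business services they support.
high_risk = [(a.name, a.business_services)
             for a in inventory if a.risk_level == "high"]
print(high_risk)  # [('chat-assistant', ['Customer Support'])]
```

The key design point the bullets above make is the link from each asset to the business services it supports: that link is what turns a flat catalogue into something a board can reason about when a risk materialises.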

Governance, risk and compliance built in

  • Embed AI-specific risk management frameworks (like the NIST AI Risk Management Framework and the European Union Artificial Intelligence Act) directly into workflows.
  • Run impact assessments, monitor for emerging risks, track incidents, and prove compliance to regulators and boards.
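To make embedded, risk-calibrated workflows concrete, here is a minimal sketch of the idea: map each risk tier to the controls a use case must clear before go-live. The tier names and control lists are assumptions for illustration – they are not the EU AI Act’s actual risk categories, the NIST AI RMF’s functions, or ServiceNow workflow logic.

```python
# Hypothetical mapping of risk tiers to mandatory controls -- illustrative only.
CONTROLS_BY_TIER = {
    "low":    ["asset registration"],
    "medium": ["asset registration", "impact assessment"],
    "high":   ["asset registration", "impact assessment",
               "human review", "incident monitoring"],
}

def required_controls(risk_tier: str) -> list[str]:
    """Return the oversight controls a use case must pass before deployment."""
    if risk_tier not in CONTROLS_BY_TIER:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return CONTROLS_BY_TIER[risk_tier]

print(required_controls("high"))
```

The point of calibrating controls to risk is throughput: low-risk pilots clear a short checklist and keep moving, while high-risk initiatives automatically trigger the heavier reviews regulators and boards expect.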

Strategic portfolio management for AI

  • Align AI projects to business objectives, prioritise investments and model expected value.
  • Measure ROI, cost savings, time saved or customer experience gains to prove AI is delivering results.

Cross-functional collaboration

  • Connect IT, Risk, Legal, Compliance and business teams on one platform.
  • Move AI from isolated proofs-of-concept into organisation-wide deployment with shared visibility and accountability.

Listen. Act. Communicate.

What can leaders do next? Our research, experience and on-the-ground expertise suggest three moves you can make now.

  1. Listen: Expose your C-suite to the voice of the customer

    AI decisions aren’t just for market-facing executives. Your CIO, CTO, CRO – everyone in the C-suite – needs a direct line to customer concerns and expectations. Break down silos. Pair back-office leaders with customer-facing peers. Put them in the room for focus groups, surveys and feedback sessions. Work together to connect the dots so every decision is informed by the customer’s voice.

  2. Act: Integrate responsible AI at every stage

    Responsible AI must be central to the AI development and innovation process, from early ideation to deployment. Go beyond compliance. Identify the real risks your customers and other stakeholders care about and address them early. Stay ahead of new challenges by upskilling your teams, tapping external expertise, and keeping pace with emerging models and their impacts.

  3. Communicate: Showcase your responsible AI practices

    Customers won’t use AI they don’t trust. That’s both a risk and an opportunity. Lead with transparency. Show how your AI is fair, safe and accountable. Make Responsible AI part of your brand story, and you are more likely to stand out from competitors and win loyalty in the process.

Underpinning all three: adopt technology that enables the process. Platforms like ServiceNow AI Control Tower give leaders the visibility and control needed to manage, govern and scale AI responsibly.


See it. Steer it. Scale it.

Our journey to the AI Control Tower started with the same challenges our customers are now facing: balancing the speed of innovation with the discipline of responsible control. Instead of applying uniform controls to all AI initiatives, the AI Control Tower calibrates oversight based on the specific risk profile of each use case. Lower-risk innovations can proceed with appropriate velocity, while higher-risk initiatives receive more rigorous controls.

Effective AI management must function like air traffic control: maximum throughput with absolute safety, driven by centralised visibility and distributed action. The AI Control Tower creates managed complexity rather than imposed simplicity. It orchestrates workflows, verifies compliance and measures performance through one system.

AI success isn’t measured by the number of proofs-of-concept you launch; it’s measured by the confidence to scale. Leaders need a 360-degree view of every AI asset, the risks it carries and the value it delivers. With EY’s governance expertise and the ServiceNow AI Control Tower, leaders can move fast without losing sight of safety, compliance and long-term value.

A control tower is only as powerful as the governance behind it. EY brings the frameworks, oversight and integration discipline that turn ServiceNow’s technology into a trusted hub for AI. Every project visible, every risk managed, and every opportunity aligned to strategy.

Lead with confidence

EY and ServiceNow bring together world-class governance, proven frameworks and market-leading technology so you can scale AI responsibly.

Connect with Chee Kong Wong.

EY Regional ServiceNow Consulting Leader, Oceania

Summary

AI is advancing rapidly, but many organisations lack the oversight needed to manage its risks effectively. Leaders face pressure to scale quickly, yet governance frameworks often trail behind. EY and ServiceNow offer a solution through a centralised system that maps AI usage, monitors risk and aligns innovation with accountability. Their approach helps organisations move beyond experimentation, enabling responsible deployment at scale. By integrating ethical principles and operational controls, businesses can build trust, meet regulatory expectations, and unlock long-term value. This strategy empowers teams to collaborate across functions, and steer AI with clarity and confidence.
