
Why AI-native utilities will not look like utilities

Success hinges on reinventing traditional models with AI-first principles and the intelligent orchestration of humans and machines.


In brief:

  • The utility of the future will be defined by intelligence, not infrastructure.
  • AI is not just accelerating utility operations but also reinventing them.
  • To be future-ready, utilities must become “AI-native” operators.

The utility sector is entering an era where volatility becomes the norm and intelligence becomes the defining source of stability. Across Southeast Asia, electrification is accelerating rapidly. Electricity demand has grown by more than 60% over the past decade.1 The region is also projected to account for 25% of global energy demand growth to 2035.2 Distributed generation is scaling rapidly, with renewables expected to increase their share of the power mix across Southeast Asia from 25% in 2024 to 28% by 2030.3 As these forces converge, the traditional model of infrastructure stewardship begins to collapse under the weight of new expectations that will rewrite the rules of the energy system over the next decade.

 

Customer expectations are also undergoing a profound shift. EY research shows that 77% of global energy consumers want their energy provider to offer low-cost options alongside premium services. Sixty-seven percent say they cannot absorb even a 10% increase in their energy bill. Customer expectations are therefore coalescing around comfort, cost certainty and personalized energy choices, increasing pressure on utilities to deliver seamless, flexible and data-driven experiences. Beyond end consumers, communities want greater transparency and faster progress. Investors want clarity and precision of information.

 

The power grids that served consumers in the previous century are no longer equipped to meet demand for this century and beyond. Utilities are transitioning from managing physical assets to orchestrating complex ecosystems. They no longer operate in a world defined by mechanical predictability but one driven by continuous sensing, fluid coordination and system-level intelligence.

 

This shift places unprecedented demands on decision-making. Human-only processes, designed for an era of stability and slow-moving signals, cannot keep pace with the speed or complexity of what lies ahead. Artificial intelligence (AI) must now play a foundational role: not as an additional layer on top of existing systems but as the core logic through which the energy system operates.

Changing nature of transformation with AI

Many utilities have already cycled through waves of automation, digitalization and process improvement. While these changes deliver meaningful efficiency gains, they do not alter the enterprise’s underlying operating rhythm.

 

AI is different because it reshapes the nature of work itself. AI shifts the focus to outcomes. It displaces the notion of a fixed process and replaces it with systems capable of interpreting context, simulating alternatives and determining the next action. It transforms the relationship between human judgment and operational execution, reducing the cognitive burden on people and recasting their contribution toward oversight, arbitration and strategic direction.

 

Unlike previous transformations, which unfolded in large, punctuated cycles, AI introduces a continuous mode of adaptation. The organization becomes a learning organism, constantly refining its models, adjusting its assumptions and evolving its operating practices as conditions shift. The result is not simply a more digital utility but an entirely different kind of utility.

Behind the curve despite being AI-enabled

The AI shift is not happening in isolation. According to the EY 2025 Work Reimagined Survey, the global AI race is already well underway: 88% of employees globally are using AI at work, yet only 5% qualify as advanced users. Such users combine multiple AI tools to run complex tasks and decision flows. This gap is critical. As employees gain fluency and confidence with AI, the proportion of advanced users will rise sharply, increasing expectations for systems that can ingest, interpret and act on intelligence at scale. Utilities cannot rely on incremental enablement when their future workforces will be operating in AI-augmented environments by default.


Furthermore, the pace of agentic AI innovation is accelerating much faster than traditional enterprise planning cycles. By 2028, more than 1.3 billion AI agents are expected to be in operation,4 highlighting the accelerating pace of agentic AI transformation and the shrinking window for enterprises to adapt.

Many utilities describe themselves as “AI-enabled”. They deploy pilots, experiment with use cases and add analytical tools. But such framing reveals a deeper constraint: the processes, structures and governance frameworks surrounding those tools remain rooted in an operating model designed for a different era.

 

Being “AI-enabled” is fundamentally “tool-centered”, which means intelligence merely exists beside the work to enhance tasks. In contrast, being “AI-native” is “operating model-centered”, which means intelligence runs through the work to operate a system. This distinction is subtle in language but profound in practice: AI-enabled utilities improve, whereas AI-native utilities transform.

An AI-native utility is not one that simply deploys more AI. It reimagines processes with embedded intelligence, organizes work around outcomes, positions humans as orchestrators and uses governance to regulate autonomy across humans and digital “polyagents”.

 

This is the challenge that Southeast Asian utilities now face: moving to systemic intelligence.

AI-native operating model: an emerging paradigm

The AI-native operating model is not a futuristic vision; it is an emerging paradigm shaped by five interdependent shifts, through which the organization becomes a dynamic network rather than a static chart.

1. Processes evolve into learning systems

AI-native processes are dynamic. They sense conditions, test hypotheses, generate options and learn from the outcomes. Instead of waiting for updated plans or periodic studies, the system continuously refines itself. For power utilities, planning becomes an ongoing dialogue between humans and AI agents. Outage management becomes a proactive system of anticipation, mitigation and optimized restoration. Distributed energy resource integration evolves into ongoing orchestration across thousands of distributed assets. The process is no longer linear. It becomes an insight-driven dialogue.
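The sense-test-learn loop described above can be sketched in a few lines. Everything here is invented for illustration: the feeder readings, the 5 MW thresholds and the naive action-selection rule are assumptions, not utility practice.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    feeder_load_mw: float   # sensed load on a feeder (hypothetical)
    forecast_mw: float      # short-horizon demand forecast (hypothetical)

def propose_actions(reading):
    """Generate candidate actions from the sensed state (toy rules)."""
    gap = reading.forecast_mw - reading.feeder_load_mw
    options = [("hold", 0.0)]
    if gap > 5.0:
        options.append(("dispatch_der", gap))    # call on distributed resources
    if gap < -5.0:
        options.append(("curtail", -gap))        # trim excess supply
    return options

def learn(history, outcome):
    """Record the outcome so the next cycle starts from updated evidence."""
    history.append(outcome)
    return history

# One pass of the sense -> propose -> act -> learn cycle
history = []
reading = Reading(feeder_load_mw=42.0, forecast_mw=55.0)
actions = propose_actions(reading)
chosen = max(actions, key=lambda a: a[1])        # stand-in for real evaluation
history = learn(history, {"reading": reading, "action": chosen})
```

In a real system the `propose_actions` rules would be learned models and the selection step a simulation or optimization; the point is only that the process is a loop, not a plan.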

2. People become adjudicators

Human expertise remains indispensable, but its purpose changes. People no longer assemble data, check assumptions or manually run models. Instead, they interpret the intelligence produced by AI agents, validate context, navigate ethical and operational boundaries and adjudicate the trade-offs that define system performance. Critically, humans will no longer use AI capabilities just to execute tasks on command. They will continually engage in dialogue and co-reasoning with digital agents. In a “no collar” environment, this partnership becomes the norm: autonomous agents handle execution, while traditional white- and blue-collar roles provide the judgment that keeps the system aligned, accountable and strategically grounded. This shift is fundamental. The most valuable human skill becomes sense-making.

3. Digital becomes an intelligent operating fabric

Intelligence cannot scale in silos. AI-native utilities rely on a shared fabric of models, data semantics, policy engines and agent orchestration layers that connect planning, operations, field work, customer experience and markets.

Some intelligence is built directly into the core IT and operational technology platforms, redefining how processes and core technologies operate. Other AI capabilities remain modular and extensible to support rapid experimentation. What separates leaders from laggards is not choosing one approach over the other but blending both — deploying intelligence at scale across value chains where it creates the most impact.

The strength of the architecture lies not in uniformity but in deliberate design, deciding what must be stable and what must remain fluid. A balanced AI-native architecture would typically include the following:

  • Standard AI capabilities inside core systems like enterprise resource planning, enterprise asset management, customer relationship management and asset data management systems for predictability and safety
  • Flexible AI agents at the edge that will reimagine workflows, explore new insights and orchestrate interactions across data sources, platforms and teams

4. Governance sets the nature of autonomy

AI-native governance no longer monitors tasks; it governs behavior. It defines when agents may act, when they must defer to human judgment, how conflicting recommendations are resolved and how learning is managed over time. It embeds operational guardrails directly into platforms so that autonomy unfolds inside clearly defined limits. AI-native governance must account not only for risk but also for value and outcomes by aligning agent behavior with the organization's goals. Humans remain fully in control, not through manual intervention but through the rules they set.
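A minimal sketch of such guardrails as code: agents act autonomously below an impact threshold, defer to humans above it, and conflicting recommendations always escalate. The agent names, impact scores and threshold values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    action: str
    impact_score: float   # assumed 0-1 measure of operational impact

# Illustrative guardrails; these thresholds are invented for the sketch
AUTONOMY_LIMIT = 0.3      # below this, the agent may act on its own
ESCALATION_LIMIT = 0.7    # above this, a human must make the call

def adjudicate(proposals):
    """Resolve one or more agent proposals against the guardrails."""
    if len({p.action for p in proposals}) > 1:
        return "escalate_conflict"            # conflicting recommendations -> human
    p = max(proposals, key=lambda p: p.impact_score)
    if p.impact_score < AUTONOMY_LIMIT:
        return "auto_execute"
    if p.impact_score < ESCALATION_LIMIT:
        return "human_review"
    return "human_decision_required"

low = adjudicate([Proposal("outage_agent", "reroute", 0.2)])
clash = adjudicate([Proposal("a", "reroute", 0.2),
                    Proposal("b", "shed_load", 0.4)])
```

The guardrails live in code rather than in a manual, which is what "humans remain in control through the rules they set" means in practice.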

5. Hierarchical structures become multimorphic

The most significant evolution happens in how people and agents work together. Rigid, hierarchical structures give way to multimorphic pods: cross-functional teams whose composition shifts dynamically based on required outcomes.

These pods blend human expertise, AI assistants and autonomous agents within a flatter structure. They work across traditional boundaries, guided by shared intelligence and focused on shared outcomes. Routine activities become almost entirely automated. Work becomes more fluid, more interdisciplinary and more strategic. In this context, the pod becomes the atomic unit of value creation. It is the structure through which the AI-native utility operates.
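Pod formation can be thought of as capability matching rather than org-chart assignment. The roster, role names and greedy selection below are purely illustrative assumptions.

```python
# Hypothetical roster: each member is (name, kind, capabilities)
ROSTER = [
    ("asset_engineer", "human", {"judgment", "asset_health"}),
    ("grid_planner",   "human", {"judgment", "scenario_design"}),
    ("forecast_agent", "agent", {"load_forecast"}),
    ("restore_agent",  "agent", {"switching_plans", "asset_health"}),
]

def form_pod(required):
    """Greedily assemble the members covering the required capabilities."""
    pod, covered = [], set()
    for need in required:
        if need in covered:
            continue
        member = next((m for m in ROSTER if need in m[2]), None)
        if member and member not in pod:
            pod.append(member)
            covered |= member[2]
    return pod

# A restoration pod: human judgment plus forecasting and switching agents
pod = form_pod({"judgment", "load_forecast", "switching_plans"})
```

The composition changes as the required outcome changes, which is the sense in which the pod, not the department, is the unit of value creation.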

Moving from ambition to operating reality

Becoming AI-native requires a coherent shift across capabilities, interactions and intelligence. Utilities must build enterprise-wide intelligence engines instead of isolated use cases. They must turn workflows into human-agent interactions rather than sequences of tasks. And the focus must shift from managing data to managing the intelligence of the entire ecosystem.

When these changes take hold, the organization will move from incremental improvement to continuous reinvention.

Utilities face a choice: iterate the past or redesign for the future. The scale of change ahead demands first-principles thinking. Utilities must ask fundamental questions:

  • What outcomes define success?
  • Which decisions must humans make and what can be entrusted to agents?
  • What intelligence must be embedded in the fabric of the organization?

AI does not simply accelerate outcomes for the utility; it redefines the utility. In Southeast Asia and beyond, the leaders of the next decade will be those that act now to shape a more resilient, affordable and sustainable energy future.

This article was authored with contributions from Associate Partner Lynette Oo, Senior Manager Yu Ting Khoo and Manager Cyndee Yap of Business Consulting at Ernst & Young Consulting Sdn. Bhd.

