Embedding safe and reliable AI at scale also means institutionalising governance as part of product and operations lifecycles. This will enable rapid experimentation without sacrificing resilience, reliability or trust. Companies that succeed will mitigate regulatory and reputational risk and prevent operational failures that compromise growth and customer outcomes.
Opportunity 4: Rethink pricing strategy for the agentic era
AI-native companies are redefining how software is priced, packaged and purchased. The rise of agentic-mediated buying is transforming customer engagement, as traditional subscription and consumption models give way to secure APIs, instant trials and outcome-based pricing. Customers are starting to expect frictionless experiences and transparent value, not just access or usage.
Outcome-based pricing is emerging as the preferred approach to address changing customer expectations and navigate macroeconomic pressures. The most recent EY – Oxford Economics Global Technology Industry AI Survey found that 89% of tech CEOs are exploring innovative pricing models, including outcome-based pricing. But exploration alone won’t be enough. In 2026, organisations must move to meaningful deployment by tying pricing directly to delivered outcomes and measurable value.
In this shift, technology companies that possess well-structured data and can clearly demonstrate outcomes will hold a distinct competitive advantage: they are uniquely positioned to prove value transparently and consistently.
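To make the idea of tying price to delivered outcomes concrete, here is a minimal billing sketch. The `OutcomeTier` structure, metric names and rates are hypothetical illustrations, not drawn from any vendor's actual pricing model; a real implementation would also need metering, auditing and dispute handling.

```python
from dataclasses import dataclass

@dataclass
class OutcomeTier:
    """A hypothetical pricing tier: a per-unit rate for one measured outcome."""
    metric: str   # e.g. "tickets_resolved" or "fraud_cases_flagged"
    rate: float   # price charged per delivered unit of the outcome

def outcome_based_invoice(base_fee: float, tiers: list[OutcomeTier],
                          delivered: dict[str, int]) -> float:
    """Compute a periodic invoice: a small platform fee plus charges
    tied directly to measurable, delivered outcomes."""
    total = base_fee
    for tier in tiers:
        total += tier.rate * delivered.get(tier.metric, 0)
    return round(total, 2)

tiers = [OutcomeTier("tickets_resolved", 0.75),
         OutcomeTier("fraud_cases_flagged", 4.00)]
invoice = outcome_based_invoice(200.0, tiers,
                                {"tickets_resolved": 1200,
                                 "fraud_cases_flagged": 15})
print(invoice)  # 200 + 900 + 60 = 1160.0
```

The key design point is that the customer is billed on the `delivered` dictionary, i.e. verified results, rather than on seats or raw usage, which is what distinguishes this from subscription or consumption pricing.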
Opportunity 5: Optimise for flexibility in model selection
Organisations are faced with new strategic decisions as they weigh up the trade-offs between the transparency, customisation and lower costs offered by open AI models versus the performance, support and integrated safety promised by closed models. Making the right choice can become a source of competitive advantage.
The open model ecosystem offers lower barriers to entry, faster iteration and the potential for deep integration into proprietary workflows, often at a fraction of the cost of the closed alternative. Closed models, meanwhile, continue to set benchmarks for capability and reliability, but may come with higher costs, vendor lock-in and less flexibility for localisation or compliance in different geographies and jurisdictions.
This is not just a technical debate; it’s also a global business and policy issue. Nor is it a simple either-or choice. In jurisdictions where access to proprietary models or infrastructure is restricted, open models enable broader adoption and innovation. The commercial opportunity lies in adopting a flexible strategy that balances price and performance, avoids single-vendor dependency and aligns with the regulatory requirements of different markets. Organisations that can adopt both open and closed models and deploy them as compliance and other requirements dictate will be well positioned to capture value, manage risk and adapt to changing conditions.
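One way such a flexible strategy can show up in practice is a routing layer that selects between open and closed models per request, based on jurisdiction and cost. The sketch below is an illustrative assumption, not an established pattern from any particular platform; the model names, costs and region lists are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelOption:
    """Hypothetical descriptor for a deployable model."""
    name: str
    is_open: bool               # open-weights vs proprietary/closed
    cost_per_1k_tokens: float
    allowed_regions: frozenset  # jurisdictions where deployment is permitted

def select_model(options: list[ModelOption], region: str,
                 require_open: bool = False) -> ModelOption:
    """Pick the cheapest model that satisfies regional compliance,
    optionally restricted to open models (e.g. for on-prem localisation)."""
    eligible = [m for m in options
                if region in m.allowed_regions
                and (m.is_open or not require_open)]
    if not eligible:
        raise ValueError(f"No compliant model for region {region!r}")
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

catalogue = [
    ModelOption("open-7b", True, 0.10, frozenset({"EU", "US", "IN"})),
    ModelOption("closed-frontier", False, 0.90, frozenset({"US"})),
]
print(select_model(catalogue, "EU").name)  # open-7b
print(select_model(catalogue, "US").name)  # open-7b (cheapest eligible)
```

Because the routing decision is data-driven, adding a new market or a new vendor is a catalogue change rather than a re-architecture, which is what keeps the organisation free of single-vendor dependency.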
Opportunity 6: Design sovereignty by default and run a borderless talent model
Sovereign and local AI processing is becoming standard as governments around the world tighten data residency and compliance mandates. Countries are asserting control over infrastructure and shaping AI to align with local priorities. While regulations such as the European Union’s Digital Markets Act (DMA), Digital Services Act (DSA) and AI Act are impacting companies’ plans, sovereignty now stretches far beyond compliance. It spans where employees live, where compute happens, and how foundational models reflect national values, morals and traditions.
For technology leaders, sovereignty presents both a technical and organisational challenge. Architectures must embed local jurisdictional controls from the outset. This affects cost and scalability.
It will also force the reimagination of talent strategies. Visa restrictions and other local issues complicate mobility at the same time as innovation demands ever greater global collaboration.
Success in this environment means institutionalising sovereignty-by-default: embedding regional controls into workflows and infrastructure planning while adopting a borderless talent model that leverages distributed engineering pods and regional skills hubs to navigate local restrictions. Companies that integrate diverse regional perspectives and regulatory requirements into their strategy will achieve compliance without sacrificing speed, enabling global scale in an increasingly fragmented landscape.
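Embedding regional controls "from the outset" can be as simple as a mandatory gate that every workload placement passes through. The following is a minimal sketch under stated assumptions: the residency policy table, region names and `WorkloadRequest` shape are hypothetical, and real residency rules are far richer than a region allow-list.

```python
from dataclasses import dataclass

# Hypothetical residency policy: which regions may process data
# originating in each jurisdiction. Illustrative mappings only.
RESIDENCY_POLICY = {
    "EU": {"eu-west", "eu-central"},
    "IN": {"in-south"},
    "US": {"us-east", "us-west"},
}

@dataclass
class WorkloadRequest:
    data_origin: str    # jurisdiction where the customer data originates
    target_region: str  # region where the job is about to run

def enforce_residency(req: WorkloadRequest) -> str:
    """Sovereignty-by-default gate: reject any placement that would move
    data outside its permitted processing regions."""
    allowed = RESIDENCY_POLICY.get(req.data_origin)
    if allowed is None:
        raise ValueError(f"No residency policy for {req.data_origin!r}")
    if req.target_region not in allowed:
        raise PermissionError(
            f"{req.data_origin} data may not be processed in {req.target_region}")
    return req.target_region

print(enforce_residency(WorkloadRequest("EU", "eu-west")))  # eu-west
```

The design choice worth noting is that an unknown jurisdiction fails closed (raises) rather than falling through to a default region; sovereignty-by-default means the absence of a policy blocks deployment instead of permitting it.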
Opportunity 7: Embed technical expertise to address platform complexity
As AI platforms and ecosystems become more complex, embedding technical talent directly into business units or project teams can accelerate adoption and improve service delivery quality as platforms evolve. Such talent is in short supply, however. According to the EY – Oxford Economics Global Technology Industry AI Survey, 27% of tech executives say a lack of AI skills is the primary barrier to greater implementation across the company, more than any other technical or operational challenge.