The AI debate has moved on. In 2025, the technology crossed a tipping point: models became powerful, affordable and widely accessible. From that moment, AI stopped being a technical challenge and became a leadership one. At recent AI roundtables and panels, a single message cut through the noise: in 2026, the biggest risk for leaders isn’t getting AI wrong, it’s doing nothing at all.
With that in mind, here are 10 things I won’t be ignoring this year.
1. The state of your data
As one panellist put it, “there’s no AI without data.” Too many organisations are still trying to scale AI on bad data foundations – and wondering why it doesn’t work. Most organisations don’t yet have the right data infrastructure for AI at scale. If your data strategy isn’t explicit, funded and owned at the top of your organisation, your AI strategy is just a slide deck, not a new capability.
2. Context as a competitive advantage
The question is rarely “is the model smart enough?” It’s “have we given it enough context?” We heard repeatedly that it doesn’t much matter which large model you choose – what matters is the quality, depth and freshness of the organisational context you wrap around it.
In practice, that means unifying knowledge, process, policy and customer data so AI can reason about your business, not just the public internet.
3. Human and AI operating models
In leading organisations, AI agents are fast becoming teammates. We’re seeing humans move into AI control tower roles – managing exceptions, guardrails and outcomes – while agents handle the repeatable work, 24/7.
Entirely new functions such as “agent operations” are emerging fast. If your operating model is still designed for a pre-AI world, update it quickly or risk being left behind.
4. Guardrails as innovation accelerators
There’s a lingering myth that governance slows AI down. In reality, the opposite is true. When you set clear guardrails, experimentation actually increases because people know where the boundaries are.
Responsible AI policies, testing regimes, and independent “trust but verify” checks will become standard practice. Leaders who get ahead of this will move faster with more confidence.
5. Sovereign AI and AI islands
Sovereign AI is no longer a niche topic. Data, infrastructure and even AI talent increasingly need to reside within specific jurisdictions.
We’re moving from global, single-instance AI to a world of AI islands – regionalised models and stacks. That has implications for architecture, vendor strategy and where you build capability.
6. AI-native competitors and structural advantage
AI-native businesses are operating with economics we haven’t seen before: hundreds of millions in revenue with only a few hundred people, thanks to extreme leverage from AI.
As one guest put it, some incumbents “won’t have a business in two years’ time” if they wait. The question for every leadership team is: are we prepared to compete with those unit economics?
7. Skills, mindset and microcredentialling
The organisations making real progress are treating AI capability like fitness. I’ve seen examples where people complete mandatory AI and cyber training, gain back several hours a week in productivity, then voluntarily opt into dozens more hours of learning because it unlocks access to more powerful tools.
Microcredentialling, hands-on labs and continuous learning now need to be a core part of your workforce strategy.
8. Physical AI, smart currency and quantum
While generative AI startups are becoming mainstream, investment is surging into three next-wave spaces: physical AI and robotics, smart currency and tokenisation, and even quantum – particularly for security and cryptography.
Leaders don’t need to be deep technologists, but they do need a clear point of view on how these will intersect with their sector over the next three to five years.
9. Mis/disinformation and AI safety
The risks that keep many experts up at night are often surprisingly mundane: AI systems that are indistinguishable from humans online, the collapse of proof of effort, and citizens unknowingly exposing sensitive data to powerful agents.
As one speaker put it, we’ve effectively created “billions of wannabe software developers” without the security training that used to be a prerequisite.
10. The cost of inaction
Finally, we heard a stark statistic: around half of organisations still don’t have a clear AI plan, while a large majority of workers are already using AI tools informally.
That gap is unsafe. For boards and executives, the bigger risk is stalling on AI or sidelining it while AI-native competitors quietly reshape your industry.
If you’re a CEO, chair or executive reading this, three questions I’d take into your next meeting are:
- Where is AI already creating measurable value in our organisation – and where is it still stuck in innovation theatre?
- Who on our executive team is truly accountable for our AI operating model, data foundations and guardrails?
- In three years’ time, what will we regret ignoring in 2026?
Because the one thing none of us can afford to ignore now is the cost of waiting.