Close-up of clicking on computer screen

How should boards respond when their AI has a ‘heartbeat’

AI is transitioning from answering questions to acting on its own, creating new risks and forcing organisations to rethink control and governance.


In brief

  • AI agents can now operate continuously and make small errors grow quickly without human oversight.
  • The real risk could come from chains of actions, not a single output, as autonomous systems use tools and act independently.
  • Boards must help redesign governance to contain cascading errors, audit trails, create fast kill switches, and map out incident response to preserve trust.

Your AI is no longer waiting to be asked and humans are now merely “welcome to observe.” It’s early February 2026 and the AI story has already moved on. If 2025 was the year of copilots, 2026 is shaping up to be the year of autopilots.

Moltbook has made agent‑to‑agent behaviour visible in public, showing how AI systems can interact with one another. OpenClaw has made the underlying mechanisms clearer, demonstrating how agents can “wake up” on their own through heartbeat‑driven processes that keep running without constant prompting. Together, these shifts point to a transition from AI that simply answers to AI that operates.

Most leaders still think of AI as episodic: you ask, it answers, and the interaction ends when the session does. Heartbeat-enabled systems behave differently. This is a structural change.

Some AI systems are now built to wake themselves, reassess their environment, determine what requires attention, and continue operating without waiting for human instruction. They can check for updates, reprioritise what matters, and carry on working without being asked. In practice, the system no longer just responds — it persists.

Once AI persists, risk shifts from the possibility of an incorrect sentence to the possibility of an incorrect action, repeated across tools and workflows before anyone realises intervention is needed. Persistence reshapes the risk model entirely.

Absolute autonomy could pose serious threats to enterprise security and expose organisations to cyberattacks, operational disruption, and data breaches.

This is why boards must treat this moment not as another feature upgrade but as an operating model change. The question for boards is no longer how to supervise systems that answer. It is whether they are ready for systems that decide and act. A responsible AI framework can help identify risk early and build trust at the core. Use the strategic window to shape the future now. 

The new AI risk frontier

In the last wave of AI adoption, board conversations focused on outputs, including hallucinations, bias, misinformation, and data leakage through text. In a heartbeat-enabled AI world, the primary risk moves from what a model says to what a system does. And the danger isn’t just one‑off mistakes anymore, but patterns of behaviour that build up quietly until they become serious. When systems act at machine speed, trust becomes something you engineer because customers see the impact first and only get the explanation afterward.

The technology that enables this change is now widely available. The model works inside a system that can browse the internet, read information, use tools, and complete tasks step by step. That means the real “intent” comes from the sequence of actions and the permissions the system has, not from a single prompt.

In short, governance should focus on what the system is allowed to do, how fast it can act, and what safeguards are in place if something goes wrong.

A simple scenario could bring this to life.

Imagine an organisation is testing an internal AI assistant / agent that sorts emails, writes drafts, and updates a case management system. Because it has a “heartbeat,” it keeps checking for urgent tasks and continues working on its own.

Now add a weak link to it: the agent also reads content from outside sources, like a supplier’s webpage or a customer email with a link. Now that could expose the agent to risk. The real danger isn’t about someone hacking the AI directly, it’s about someone slipping harmful instructions into the content that the agent is allowed to read. It’s more about manipulating the ecosystem around it.

These instructions may look like ordinary text, yet they are designed to steer the agent into unsafe actions that it is technically “allowed” to take. These could be actions such as exporting sensitive information, sharing it too widely, or sending an email that shouldn’t go out.

This could lead the agent to attach the wrong customer file or send a draft externally before it’s been reviewed. And because the agent runs on its own and has wide-ranging permissions, a single small mistake can quickly snowball into a series of bigger problems. When an agent is designed to keep operating and can do many things without supervision, the risk isn’t just one wrong action. It’s a chain of repeated errors that grows before anyone notices.

That is why responding fast is crucial. Slow reactions can damage trust, especially when the system works at machine speed.

Three risks boards need to understand now

This is what “engineered resilience” looks like. It means designing AI systems so that they have built‑in limits, safeguards, and evidence trails, rather than trying to add these controls only after something goes wrong.

Five controls boards should ask management to show within the next 90 days, with a named executive

  • Set clear permission levels for AI agents
    Sort agents into categories — from simple, read‑only helpers to those that can take high‑risk actions. Ensure important actions such as payments, data exports, external emails or system changes require strict approval.

  • Limit what agents are allowed to do
    Give each agent only the minimum access it needs. Where possible, make access time‑limited and ensure agents cannot grant themselves extra permissions.

  • Reduce how much damage an error can cause
    Control how fast agents can make changes and limit how many changes they can make at once. Block any increase in privileges or scope unless it is formally approved.

  • Assume outside content can be unsafe. Test everything
    Treat all external or unverified content as potentially risky. Test the entire workflow from end to end and watch for unusual actions or odd data handling.

  • Make fast response part of the design
    Put in place a kill switch, ways to undo changes, and a rehearsed incident plan tested through simulation. Keep detailed logs so the team can trace decisions and clearly show what happened.

Summary

The goal is not to slow down AI adoption. It is to make autonomy safe enough to be boring. The organisations that lead in 2026 will not be the ones that avoid agents. They will be the ones that can prove control, constrain scope, and contain incidents quickly while still capturing the productivity upside. Heartbeat is the signal that AI is becoming infrastructure. When AI has a “heartbeat,” resilience has to be engineered.

Related articles

How boards can improve governance in an increasingly complex environment

Today’s rising complexity and rapid change demand clarity and alignment. Boards must therefore fundamentally rethink how they operate and govern. Find out how.

    About this article