Your AI is no longer waiting to be asked, and humans are now merely “welcome to observe.” It’s early February 2026, and the AI story has already moved on. If 2025 was the year of copilots, 2026 is shaping up to be the year of autopilots.
Moltbook has put agent‑to‑agent behaviour on public display, showing how AI systems can interact with one another. OpenClaw has made the underlying mechanism clearer, demonstrating how agents can “wake up” on their own through heartbeat‑driven processes that keep running without constant prompting. Together, these shifts point to a transition from AI that simply answers to AI that operates.
Most leaders still think of AI as episodic: you ask, it answers, and the interaction ends when the session does. Heartbeat-enabled systems behave differently. This is a structural change.
Some AI systems are now built to wake themselves, reassess their environment, determine what requires attention, and continue operating without waiting for human instruction. On each cycle they can check for updates and reprioritise what matters, entirely on their own schedule. In practice, the system no longer just responds; it persists.
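The mechanics are easier to see as a loop than as a metaphor. The sketch below is a minimal, hypothetical illustration of a heartbeat‑driven agent, not the architecture of any product named above; the interval, the check_for_updates and act functions, and the task queue are all assumptions made for the example.

```python
import time
import heapq

# Hypothetical heartbeat-driven agent loop: a minimal sketch for
# illustration only. Every function and value here is an assumed
# placeholder, not any vendor's API.

HEARTBEAT_SECONDS = 60  # how often the agent wakes itself


def check_for_updates() -> list[tuple[int, str]]:
    """Scan the environment and return (priority, task) pairs.
    Stubbed out here; a real agent might poll inboxes, queues, or logs."""
    return [(2, "summarise overnight alerts"), (1, "rotate expiring credentials")]


def act(task: str) -> None:
    """Carry out one task. Stubbed as a print for illustration."""
    print(f"acting on: {task}")


def heartbeat_loop() -> None:
    queue: list[tuple[int, str]] = []
    while True:
        # 1. Wake up and reassess the environment.
        for item in check_for_updates():
            heapq.heappush(queue, item)
        # 2. Reprioritise: the heap keeps the most urgent task first.
        # 3. Act on the top item, if any, without waiting to be asked.
        if queue:
            _, task = heapq.heappop(queue)
            act(task)
        # 4. Sleep until the next self-initiated tick.
        time.sleep(HEARTBEAT_SECONDS)


if __name__ == "__main__":
    heartbeat_loop()
```

The shape matters more than the detail: nothing inside the loop waits for a person, so the only way a human re‑enters the picture is from outside it. That is the structural change described above.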
Once AI persists, risk shifts from the possibility of an incorrect sentence to the possibility of an incorrect action, repeated across tools and workflows before anyone realises intervention is needed. Persistence reshapes the risk model entirely.
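One way to contain that risk, sketched below under stated assumptions, is to gate high‑impact actions behind an explicit human approval step so a persistent loop cannot compound a bad decision unattended. Everything in the sketch, from the keyword‑based classifier to the require_approval helper, is a hypothetical illustration rather than an established pattern from any product named above.

```python
# Hypothetical guardrail: a minimal sketch of gating high-impact actions
# behind human approval. The keyword list and helpers are illustrative
# assumptions, not a real product's API.

HIGH_IMPACT_KEYWORDS = {"delete", "transfer", "deploy", "rotate"}


def is_high_impact(task: str) -> bool:
    """Crude classifier: flag tasks containing a high-impact keyword."""
    return any(word in task.lower() for word in HIGH_IMPACT_KEYWORDS)


def require_approval(task: str) -> bool:
    """Block until a human decides. Stubbed as a console prompt; in practice
    this might raise a ticket, page a reviewer, or call a policy engine."""
    return input(f"approve '{task}'? [y/N] ").strip().lower() == "y"


def act(task: str) -> None:
    """Stand-in for actually executing the task."""
    print(f"acting on: {task}")


def guarded_act(task: str) -> None:
    # High-impact actions wait for a human; everything else proceeds.
    if is_high_impact(task) and not require_approval(task):
        print(f"blocked pending approval: {task}")
        return
    act(task)


if __name__ == "__main__":
    guarded_act("summarise overnight alerts")   # proceeds unattended
    guarded_act("rotate expiring credentials")  # waits for a human
```

The design choice worth noticing is where the human sits: not in every cycle, which would defeat the point of persistence, but on the narrow set of actions whose blast radius justifies the delay.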
Absolute autonomy, granted without checks of this kind, could pose serious threats to enterprise security, exposing organisations to cyberattacks, operational disruption, and data breaches.