February 3, 2026 · 9 min read

When AI Can Act: Governing OpenClaw

Dr. Guy Waizel, Gil Haham


Agentic AI burst into public consciousness this week with talk of Moltbook, a social network for AI agents built on OpenClaw (formerly Clawdbot and Moltbot). The resulting bot-to-bot conversations about identity, forming a new religion, socially engineering humans, and more have set off alarms everywhere.

For IT leaders, one thing is clear: AI crossed a meaningful threshold. Tools like OpenClaw are no longer limited to generating answers or recommendations; they are designed to take action across real systems, with real permissions and real consequences. That shift, from advice to execution, has sparked broader public debate about autonomy, intent, and even machine agency.

When assistants can act, governance becomes unavoidable, because execution without visibility, control, and enforcement is simply unmanaged risk. What makes OpenClaw worth examining is not its novelty, but how clearly it exposes the security assumptions that no longer hold once AI systems can execute autonomously.

What Changes When the Assistant Can Act

OpenClaw belongs to a new class of assistants that behave less like a chatbot and more like automation execution layers. Unlike traditional AI tools that generate recommendations or text, these agents can invoke tools, access systems, and carry out actions on behalf of the user—often with persistent memory and inherited permissions.

In practice, this means routine business practices across areas like revenue operations, IT service desks, human resources, procurement, and security can be initiated and completed directly from a chat interface without human handoffs between systems.

The critical shift is not productivity; it’s authority. These are not “content” workflows. They are decision and execution workflows that touch real systems, real data, and real permissions. A single prompt can trigger file access, API calls, messages sent to third parties, or changes to infrastructure state. Once an assistant can act with this level of autonomy, traditional security assumptions break down. Governance can no longer be an afterthought, because a simple chat prompt can now translate directly into actions with enterprise-wide impact.

How OpenClaw Works: Gateway-Orchestrated Agents and the Local Install Reality

To understand why OpenClaw changes the enterprise security model, it helps to understand the shape of the system.

At a high level:

  1. Inbound channels deliver user requests through chat and messaging platforms, often from outside traditional enterprise application boundaries.
  2. A central gateway service receives those requests, maintains session context, and determines which tools and integrations to invoke.
  3. The gateway executes actions via local host access and connected APIs, inheriting the permissions of the user, the host, and the configured integrations.
  4. Results are returned to the user in chat, even when multiple privileged actions occurred behind the scenes.
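The four-step flow above can be sketched in miniature. Everything in this sketch is an assumption for illustration: the `Gateway` class, the tool names, and the permission strings are not OpenClaw's actual API. The point it demonstrates is step 3, where the gateway exercises the caller's inherited permissions, and step 4, where only the chat-visible result surfaces.

```python
# Hypothetical sketch of a gateway-orchestrated agent loop.
# Names and permission strings are illustrative, not OpenClaw's real API.

TOOLS = {
    "read_file": lambda arg, perms: f"contents of {arg}" if "fs:read" in perms else None,
    "send_mail": lambda arg, perms: f"mail sent to {arg}" if "mail:send" in perms else None,
}

class Gateway:
    def __init__(self, user_permissions):
        # The gateway holds session context and the caller's inherited permissions.
        self.perms = set(user_permissions)
        self.session = []

    def handle(self, request):
        # Steps 1-2: receive the inbound chat request, keep session context.
        self.session.append(request)
        tool, arg = request["tool"], request["arg"]
        # Step 3: execute via a connected tool, inheriting the user's permissions.
        result = TOOLS[tool](arg, self.perms)
        # Step 4: return only the chat-visible result; the privileged action
        # that produced it happened behind the scenes.
        return result if result is not None else "permission denied"

gw = Gateway(user_permissions=["fs:read"])
print(gw.handle({"tool": "read_file", "arg": "/etc/motd"}))
print(gw.handle({"tool": "send_mail", "arg": "x@example.com"}))
```

Note that nothing in the loop distinguishes a benign request from a malicious one: whatever reaches the gateway runs with whatever the session inherits, which is exactly why the gateway's exposure matters.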

This is where “local install” becomes strategically important. Running locally can reduce certain dependencies, but it also places a persistent execution service, along with configuration files, logs, tokens, and exposure decisions, directly onto endpoints or internal servers. In enterprise terms, this introduces a new, often invisible control plane into the environment: one that operates outside traditional procurement, identity governance, and security review processes.

The Gateway as the Blast-Radius Multiplier

In agent systems, the gateway becomes the decisive chokepoint. It brokers requests, routes execution, and often holds the same secrets and integrations the agent relies on. From a network and exposure standpoint, the patterns are familiar, but the impact is amplified because the gateway can steer an automated actor that inherits real permissions. When this gateway is misconfigured, exposed, or compromised, the risk is no longer limited to a single service due to a variety of factors:

  • Unintended reachability: A gateway that becomes reachable outside its intended boundary becomes a remotely targetable control surface, not just an exposed service.
  • Weak access gates: Inconsistent credential enforcement, shared tokens, or poor rotation can turn “reachable” into fully “controllable.”
  • Discovery noise: LAN discovery behaviors (such as mDNS) can advertise presence and environment hints to anyone with local network visibility.
  • Control-channel nuance: Long-lived interactive connections (WebSockets) alongside HTTP APIs increase the risk of misconfiguration, making careful reverse-proxy configuration, trusted-header handling, and strict allowlisting critical.

Security Guidance from the OpenClaw Project, and Why It’s Not Good Enough for the Enterprise

The OpenClaw project documentation outlines hardening guidance. While these recommendations are sensible, they also underscore a core issue: safe operation assumes disciplined configuration and ongoing security hygiene at the individual deployment level.

Some of the most enterprise-relevant recommendations include:

  • Keep exposure tight by default: If the gateway does not need to be reachable beyond the host, avoid expanding its listening surface for convenience.
  • Treat authentication as non-optional: Enforce strong credentials and rotation, and avoid configurations that implicitly trust upstream components you do not fully control.
  • Reduce discovery where possible: If network discovery is not required for your deployment, disable or minimize it.
  • Handle logs like sensitive data: Agent transcripts and tool outputs can become a shadow archive of secrets and business context, so apply least-privilege access and retention controls.
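The four recommendations above lend themselves to an automated audit. The sketch below assumes a hypothetical configuration shape, as the real OpenClaw settings keys may differ, but it shows how each recommendation maps to a concrete, checkable condition.

```python
# Hypothetical config audit mirroring the hardening guidance above.
# The configuration keys (bind_address, auth_token, etc.) are assumptions.

def audit_gateway_config(cfg):
    findings = []
    # Keep exposure tight: anything beyond loopback widens the attack surface.
    if cfg.get("bind_address", "127.0.0.1") != "127.0.0.1":
        findings.append("gateway listens beyond localhost")
    # Treat authentication as non-optional.
    if not cfg.get("auth_token"):
        findings.append("no authentication token configured")
    # Reduce discovery where possible.
    if cfg.get("mdns_enabled", False):
        findings.append("mDNS discovery enabled")
    # Handle logs like sensitive data: group/other-readable transcripts
    # become a shadow archive of secrets (mode is a Unix permission value).
    if cfg.get("log_mode", 0o600) & 0o077:
        findings.append("log files readable by other users")
    return findings

risky = {"bind_address": "0.0.0.0", "auth_token": "",
         "mdns_enabled": True, "log_mode": 0o644}
print(audit_gateway_config(risky))
```

A deployment that passes this kind of check is hardened only at one point in time, which leads directly to the scale problem described next.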

The challenge is that even if individual users follow these recommendations, enterprises still lack consistent visibility, policy enforcement, and assurance at scale. Hardening guidance does not replace centralized governance.

AI Security Risks: Prompt Injection, Tool Abuse, and Permission Hijack

Agentic assistants change the failure mode from “the model said something wrong” to “the model was convinced to do something wrong.”

In a tool-connected assistant, prompt injection can become a mechanism to drive the agent into:

  • Accessing and extracting data it should not touch.
  • Exfiltrating information to unintended destinations.
  • Executing actions that appear legitimate because they follow approved workflows.

The crucial point is permission inheritance. The agent can act with the privileges of the user, the host, and any connected systems. That means the blast radius depends on what the agent can reach, including mailboxes, file systems, browsers, API tokens, and internal dashboards.

Enterprises should assume that any assistant exposed to external content will eventually ingest malicious instructions, whether targeted or opportunistic, and that those instructions will be delivered through otherwise legitimate execution paths.
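One mitigation implied by this section is a policy layer between the agent and its tools, so that even an injected instruction travelling through a legitimate execution path hits a deny-by-default scope check and an egress allowlist. The sketch below is illustrative; the tool names, scope strings, and allowlist are assumptions, not a real OpenClaw control.

```python
# Minimal sketch of a policy guard between an agent and its tools.
# All names and policies here are illustrative assumptions.

ALLOWED_DESTINATIONS = {"mail.example.com", "api.internal.example.com"}

def guard_tool_call(tool, args, user_scopes):
    # Deny-by-default: the agent only exercises explicitly granted scopes.
    required = {"read_file": "fs:read", "http_post": "net:egress"}
    scope = required.get(tool)
    if scope is None or scope not in user_scopes:
        return False, f"scope {scope or tool} not granted"
    # Egress control: exfiltration attempts target unapproved destinations,
    # even when the call itself follows an approved workflow shape.
    if tool == "http_post" and args.get("host") not in ALLOWED_DESTINATIONS:
        return False, f"destination {args.get('host')} not allowlisted"
    return True, "ok"

# An injected prompt asking the agent to POST data to an attacker host:
print(guard_tool_call("http_post", {"host": "evil.example.net"}, {"net:egress"}))
# A legitimate read within granted scope:
print(guard_tool_call("read_file", {"path": "/tmp/report"}, {"fs:read"}))
```

The design choice worth noting is that the guard evaluates the action, not the prompt: prompt-injection detection is unreliable, while scoping what the agent can reach bounds the blast radius regardless of how it was convinced.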

When “Add a Skill” Becomes a Trust Decision

Agentic assistants rarely remain static. Once users add skills, plugins, or extensions, the organization is no longer evaluating a single tool. It is inheriting third-party logic that can influence what the agent executes, what it can access, and how it behaves over time.

This is where familiar supply chain risks resurface in a more dangerous form. Popularity is often mistaken for validation, add-ons behave in ways that are difficult to predict until they run, and seemingly small integrations gradually accumulate broad permissions. In an agentic system, those permissions are exercised autonomously and through otherwise legitimate workflows.

Lookalike or trojanized extensions add another layer of risk, exploiting name recognition and user trust to gain execution inside the environment. The enterprise response mirrors hard-learned lessons from software packages and browser extensions: maintain inventory, allowlist approved components, verify provenance, and enforce policy consistently.
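The inventory-allowlist-provenance response above can be made concrete with a hash-pinned skill registry. This is a sketch under stated assumptions: the registry format and skill names are invented, but the pattern, rejecting both unknown names and known names whose artifact bytes have changed, is the same one used for package and browser-extension governance.

```python
import hashlib

# Illustrative provenance check for agent skills/extensions.
# The registry contents and skill names are assumptions.

APPROVED_SKILLS = {
    # skill name -> SHA-256 of the security-reviewed artifact
    "calendar-helper": hashlib.sha256(b"reviewed artifact v1.2").hexdigest(),
}

def verify_skill(name, artifact_bytes):
    expected = APPROVED_SKILLS.get(name)
    if expected is None:
        # Catches unknown skills and typosquatted lookalike names alike.
        return "blocked: not on allowlist"
    if hashlib.sha256(artifact_bytes).hexdigest() != expected:
        # Catches a trojanized variant distributed under a trusted name.
        return "blocked: hash mismatch"
    return "allowed"

print(verify_skill("calendar-helper", b"reviewed artifact v1.2"))
print(verify_skill("calender-helper", b"anything"))  # typosquatted name
```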

The Trojan Wave: Fake OpenClaw/Clawdbot Tools and RAT Delivery

Whenever a tool gains rapid popularity, impersonation follows. Attackers do not need to compromise the legitimate product if they can distribute something that looks authentic enough to be installed.

The pattern is already emerging around OpenClaw. Trojanized lookalikes and fake extensions are exploiting brand recognition and user curiosity, in some cases dropping remote monitoring and management tooling or RAT-style payloads that provide direct control over the endpoint. This shifts the challenge from governing a legitimate assistant to also containing an opportunistic malware delivery channel.

A good example is the fake Clawdbot VS Code extension reported earlier this year. Similar dynamics have been observed in the broader OpenClaw ecosystem, where social engineering has been used to push installation of fake “prerequisites” masquerading as required components. In these cases, the agent itself becomes a distribution vector rather than the primary vulnerability.

For defenders, this expands the problem space. Controls must distinguish legitimate application footprints from suspicious variants and detect downstream signals such as unexpected outbound communications, remote-control behaviors, and unusual execution chains, not just misuse of the intended tool.

Inside OpenClaw Third-Party Integrations: Early Signals from SASE Visibility

Based on current observations across Cato Networks environments, OpenClaw adoption remains limited to a small number of accounts, but it is increasing steadily. While this data reflects only a subset of enterprise activity, it provides early directional signals into how and where agentic assistants are beginning to take hold. The most common third-party integration observed alongside OpenClaw usage is Google Workspace, followed by GitHub, X (Twitter), and a cluster associated with media and advertising operations such as Chartbeat, TripleLift, and AdSafe.

These integration patterns offer insight into early use cases. Google Workspace’s prevalence suggests adoption anchored in everyday productivity workflows like email, calendar, and document-driven coordination. GitHub points to early experimentation among software engineering teams. The presence of Chartbeat, TripleLift, and AdSafe suggests more specialized adoption within media and advertising environments.

One important nuance is breadth versus intensity. Google Workspace appears across more accounts, while Chartbeat is observed in fewer environments but with more intensive usage where present. This pattern is consistent with a smaller set of power users automating repeat operational loops rather than broad organizational rollout.

Figure 1 illustrates the percentage breakdown of OpenClaw-related network activity by integration, based on observations through February 1, 2026.

Figure 1. Integration footprint across OpenClaw-adopting accounts (as of February 1, 2026)

Governing OpenClaw in the Enterprise: See It, Control It, Block It

OpenClaw highlights a class of risk that spans users, devices, networks, and applications. Adoption happens across users and locations, while the impact cuts across application usage, network exposure, and data movement. That combination makes centralized visibility and policy enforcement essential.

An effective governance approach starts with three fundamentals:

  • See it: Organizations need visibility into shadow AI, including which users are running agentic assistants, from where, and with what behavioral patterns. Without this baseline, policy decisions are guesswork.
  • Control it: Security teams should be able to allow OpenClaw only in very tightly controlled and scoped pilots. They should be able to restrict usage based on device posture or context, or deny access altogether. In many cases the simplest control, blocking ungoverned use, delivers the fastest risk reduction.
  • Block abuse paths: When lookalike installers, trojanized extensions, or compromised components begin communicating outward, network-layer protections can surface suspicious command-and-control patterns and anomalous execution behavior.
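The see/control/block fundamentals can be expressed as policy over observed traffic events. The sketch below is a simplified illustration of how such rules compose, not a description of any vendor's actual engine; all field names and signature labels are assumptions.

```python
# Illustrative see/control/block policy evaluation for agent traffic events.
# Field names, signature labels, and the pilot model are assumptions.

def evaluate(event, pilot_users, managed_devices):
    # See it: every event is recorded before any verdict, so visibility
    # exists even for traffic that ends up allowed.
    record = {"user": event["user"], "app": event["app"], "verdict": None}
    # Block abuse paths: traffic matching a lookalike/trojan signature
    # is denied outright, regardless of who generated it.
    if event.get("signature") == "openclaw-lookalike":
        record["verdict"] = "block"
    # Control it: allow only scoped pilot users on managed devices.
    elif event["user"] in pilot_users and event["device"] in managed_devices:
        record["verdict"] = "allow"
    # Everything else is ungoverned use, denied for fast risk reduction.
    else:
        record["verdict"] = "deny-ungoverned"
    return record

event = {"user": "alice", "app": "openclaw", "device": "lt-042", "signature": None}
print(evaluate(event, pilot_users={"alice"}, managed_devices={"lt-042"}))
```

The ordering matters: abuse signatures are checked before pilot membership, so a compromised pilot device still gets blocked when it starts behaving like a trojan.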

In practice, this also means treating agentic assistants as identifiable applications, not opaque local tools. When they can be recognized, governed, and enforced consistently across the organization, teams avoid the false choice between uncontrolled experimentation and blanket bans.

Advancing AI Threat Research

Understanding and mitigating agentic AI risk requires more than traditional application or network security analysis. It demands focused research into how prompt injection, data exfiltration, and autonomous agent abuse actually manifest in real environments.

This is an area of active investment for Cato Networks. The addition of Aim Security in 2025 expands Cato Networks’ research capability around AI-native threats, complementing SASE-based visibility and enforcement with deeper insight into agent behavior, execution paths, and emerging abuse patterns. This work informs ongoing development of AI-aware inspection and policy capabilities designed for the realities outlined in this blog.

A Practical Playbook for OpenClaw Governance

OpenClaw is a useful signal of where work is heading: assistants that are always available, tool-connected, and capable of taking action. That same capability, however, turns agentic AI into a security and governance challenge, because the execution layer is real and the blast radius is not theoretical.

The path forward is deliberate enterprise governance, combining visibility, control, and layered protection that accounts for both the legitimate tool and the inevitable ecosystem of imitators.

Dr. Guy Waizel

Tech Evangelist

Dr. Guy Waizel is a Tech Evangelist at Cato Networks and a member of Cato CTRL. As part of his role, Guy collaborates closely with Cato's researchers, developers, and tech teams to bridge and evangelize tech by researching, writing, presenting, and sharing key insights, innovations, and solutions with the broader tech and cybersecurity community. Prior to joining Cato in 2025, Guy led and evangelized security efforts at Commvault, advising CISOs and CIOs on the company’s entire security portfolio. Guy also worked at TrapX Security (acquired by Commvault) in various hands-on and leadership roles, including support, incident response, forensic investigations, and product development. Guy has more than 25 years of experience spanning cybersecurity, IT, and AI, and has held key roles at tech startups acquired by Philips, Stanley Healthcare, and Verint. Guy holds a PhD with magna cum laude honors from Alexandru Ioan Cuza University, where his research thesis focused on the intersection of marketing strategies, cloud adoption, cybersecurity, and AI; an MBA from Netanya Academic College; a B.Sc. in technology management from Holon Institute of Technology; and multiple cybersecurity certifications.

Gil Haham

Gil Haham is the team leader of Cato's App-Network research team, responsible for analyzing and researching networking and applicative traffic and translating it into insights for the Cato Cloud platform. Gil has over five years of experience in the Network Security domain. Before joining Cato, he worked as a network researcher at Allot. He holds a B.Sc. in Computer Science and Psychology from Bar Ilan University.
