
OpenClaw allows users to run AI agents directly on their own machines and connect them to everyday chat platforms such as WhatsApp, Telegram, Discord and Slack.
OpenClaw allows users to run AI agents directly on their own machines and connect them to everyday chat platforms such as WhatsApp, Telegram, Discord and Slack.The open-source AI assistant that surged across developer circles this week has already gone through three names. First Clawdbot, then Moltbot and now OpenClaw.
Clawd, err...OpenClaw, has grown rapidly, drawing millions of visitors, with more than 100,000 GitHub stars in just days. Created by Peter Steinberger, OpenClaw allows users to run AI agents directly on their own machines and connect them to everyday chat platforms such as WhatsApp, Telegram, Discord and Slack.
The appeal is straightforward: instead of relying on distant cloud services, users can operate their own personal assistant on hardware they control. OpenClaw can be paired with large language models like ChatGPT or Anthropic’s Claude, enabling it to reason through tasks, monitor messages and calendars, remember instructions and notify users when something important appears.
Also read: The lobster sheds its shell for the third time as Clawdbot becomes OpenClaw
What makes OpenClaw different from chatbots
OpenClaw belongs to a new class of “agentic” systems that don’t just respond to prompts but can take steps on a user’s behalf. That includes reading and writing files, running programmes, executing commands and even driving a web browser.
This deeper level of access is what makes the software powerful. It is also what introduces new security challenges.
Steinberger is explicit in OpenClaw’s documentation that running an AI agent this close to the operating system comes with serious implications. A single ambiguous instruction or poorly scoped task can have lasting consequences.

In one early example documented by the project, a user casually asked the assistant to list files in their home directory. The agent complied and posted the entire directory structure into a group chat, exposing system layout and potentially private project details.
Also read: Meet Clawdbot, the open-source AI assistant taking over social media feeds
When content turns into instructions
One of the most serious risks OpenClaw highlights is prompt injection, a problem that software developer and AI researcher Simon Willison explains through what he calls the “lethal trifecta” of AI agent design.
The risk appears when three things exist together: access to private user data, exposure to untrusted content and the ability to take outside actions.
OpenClaw has all three. It can read emails and documents, pull in information from websites or shared files and then act by sending messages or triggering automated tasks.

When these come together, attackers do not need to message the assistant directly. Instead, harmful instructions can be hidden inside the content the agent is asked to read. Because large language models (LLMs) struggle to tell trusted commands apart from ordinary text, the assistant may follow those hidden directions as if they came from the user.
Willison, who coined the term prompt injection, says this is different from “jailbreaking,” which tries to force models to produce unsafe content. Prompt injection targets the application around the model. It quietly changes how the system behaves.
In real terms, a sentence buried in a web page or document can redirect the agent, causing it to leak data or perform actions it was never meant to take. OpenClaw notes that this risk exists even when the bot is private.
A growing store of sensitive data
As OpenClaw operates across services, it builds up a local archive of credentials, authentication profiles and session transcripts. All of this is stored on disk to preserve continuity and functionality.
That concentration of information creates a single point of failure. If the local state directory is exposed, attackers are not dealing with isolated accounts but a bundled set of access tokens, conversations and system context. For this reason, OpenClaw treats disk access as a core security boundary and urges users to tightly restrict permissions and encrypt their devices.
Deployment choices play a major role in risk. Users run OpenClaw on personal computers or small servers, sometimes making the gateway reachable over local networks. If authentication is weak while powerful tools remain enabled, outsiders could trigger actions remotely.
OpenClaw explicitly warns against exposing its gateway broadly and advises keeping network access tightly controlled, particularly when shell execution and browser tools are active.
Automation magnifies small mistakes
Automation is one of OpenClaw’s defining features, but it also increases the impact of errors.
Because the agent can carry out sequences of actions rapidly, loosely phrased instructions can cascade into large changes in seconds. Tasks intended to organise files or manage messages can escalate into unintended deletions or system modifications.
Browsers, plugins and expanded attack surfaces
The platform also supports browser control and third-party extensions, both of which introduce additional risk.
When an agent is allowed to operate a real browser, it effectively inherits access to whatever accounts are logged into that profile. Plugins add external code that users may not be able check. Combined with automation and stored credentials, these features significantly widen the attack surface.
Why OpenClaw prioritises access control over model intelligence
OpenClaw’s security guidance repeatedly returns to a simple principle: most failures are not sophisticated exploits. They happen when someone gains access and the agent complies. The project encourages users to carefully decide who can communicate with their assistant, restrict where it is allowed to operate and limit which tools it can use.

The recent rebrand to OpenClaw comes alongside new integrations, expanded model support and dozens of security-focused code changes. Steinberger has said hardening the platform is now the top priority, even as the project adds maintainers to cope with its sudden growth.
Bottom line
OpenClaw’s promise is captured in its tagline: “Your assistant. Your machine. Your rules.” But its own security documentation makes it clear that those rules matter.
Giving AI direct access to personal computers blurs the line between software and operator. Without careful limits, the same system designed to help manage daily life can just as easily become a source of data leaks, system damage or unintended automation.
For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine