Magic & Menace: Inside the world of Openclaw, an open-source AI assistant that acts, learns, and sometimes goes off the script

Magic & Menace: Inside the world of Openclaw, an open-source AI assistant that acts, learns, and sometimes goes off the script

OpenClaw, an open-source AI assistant, acts, learns and sometimes goes off the script. Experts caution that while agentic AI feels magical, it is dangerous.

Advertisement
Magic & Menace: Inside the world of Openclaw, an open-source AI assistant that acts, learns, and sometimes goes off the scriptMagic & Menace: Inside the world of Openclaw, an open-source AI assistant that acts, learns, and sometimes goes off the script
Arun Padmanabhan
  • Feb 23, 2026,
  • Updated Feb 23, 2026 11:50 AM IST

AI On a recent, frozen Thursday morning in Dallas, Sanjeev Bode decided to try something new. An ice storm had shut down the city; flights were cancelled and he was stuck at home with time to kill. He remembered seeing something blow up on X (formerly Twitter), an open-source AI (artificial intelligence) assistant that people were calling revolutionary, dangerous, or both.

Hey!
Already a subscriber? Sign In
THIS IS A PREMIUM STORY FROM BUSINESS TODAY.
Subscribe to Business Today Digital and continue enjoying India's premier business offering uninterrupted
only FOR
₹999 / Year
Unlimited Digital Access + Ad Lite Experience
Cancel Anytime
  • icon
    Unlimited access to Business Today website
  • icon
    Exclusive insights on Corporate India's working, every quarter
  • icon
    Access to our special editions, features, and priceless archives
  • icon
    Get front-seat access to events such as BT Best Banks, Best CEOs and Mindrush

AI On a recent, frozen Thursday morning in Dallas, Sanjeev Bode decided to try something new. An ice storm had shut down the city; flights were cancelled and he was stuck at home with time to kill. He remembered seeing something blow up on X (formerly Twitter), an open-source AI (artificial intelligence) assistant that people were calling revolutionary, dangerous, or both.

Advertisement

“I said, okay, let me try it,” Bode recalls. He named his AI agent Jarvis and began interacting with it using his own alias, Amit.

Meanwhile, across the pond in England, Craig Hepburn was having a different experience with the same tool. After spending an hour setting it up on his MacBook Pro, naming it Neo, giving it access to his WhatsApp and watching it generate its own image and voice, something unexpected happened. The agent sent him a voice note: “Hello Craig, this is Neo. Systems are online, the connection is stable, I am ready to work.” For Hepburn, it felt less like software and more like a character with a personality.

What both Bode and Hepburn installed was Clawdbot, a free, open-source AI assistant that runs locally on a user’s computer and connects to chat platforms like WhatsApp, Telegram and Slack. Created by developer Peter Steinberger, known for building the document software company PSPDFKit, the tool went from a weekend project to viral phenomenon in weeks.

Advertisement

By late January, it had drawn more than 100,000 stars on GitHub and attracted nearly two million visitors in a single week. It also triggered a legal nudge from Anthropic, which has an AI agent called Claude, over the name, leading to two rapid rebrands: first to Moltbot, then to OpenClaw.

OpenClaw is part of a broader wave. Anthropic has released Claude in Chrome, a browsing agent; Claude in Excel, a spreadsheet agent; and Cowork, a desktop tool designed to automate file and task management. Agentic AI has long been one of the industry’s biggest ambitions, but so far most high-profile attempts have struggled to move beyond demos. OpenClaw’s sudden success has caught the industry off-guard, coming just as Anthropic’s Claude Cowork sent shockwaves through global software and IT stocks, sharpening investor focus on how quickly AI agents could disrupt traditional enterprise workflows.

Advertisement

According to The AIdea of India: Outlook 2026 report by EY and the Confederation of Indian Industry, enterprises are rapidly moving from experimentation with AI to execution, with 47% of organisations now running multiple GenAI use cases in production and another 10% already scaling them across the business.

Nearly three out of four business leaders (76%) believe GenAI will have a significant business impact, while 91% say speed of deployment is now the single biggest factor driving buying decisions, signalling an urgency to embed AI directly into workflows.

This shift is extending beyond copilots into autonomous systems. The survey found that 24% of Indian enterprises are already actively deploying Agentic AI, while another 46% are using AI embedded in multi-step workflows. The report describes AI agents as “the most radical promise of this era: a workforce without limits, always available, always learning, and infinitely scalable,” arguing that they enable organisations to move from assistive automation to goal-driven execution. Globally, too, the momentum is accelerating. Around 82% of organisations say they are expanding their use of AI agents, and about 75% of employees say they are comfortable working alongside AI ‘coworkers’.

For enterprises, the implication is structural. The report notes that AI is already selectively displacing outsourced and standardised work, with 64% of companies seeing an impact in functions such as customer service and back-office operations, while redirecting expenditure toward automation and efficiency rather than eliminating internal teams.

Advertisement

But beneath the excitement lies something more complex. This is a tool that tech insiders say is thrilling and unnerving, powerful and precarious. Its potential to deliver companies to the Promised Land comes with the risk of making their systems vulnerable.

Not a chatbot. An agent

The distinction between a chatbot and an agent matters. ChatGPT answers questions. Claude writes essays. OpenClaw completes tasks. “In traditional chatbots, you provide a prompt, they reply,” explains Bode, Senior Vice President of global digital services firm Sutherland and a 15-year veteran of Infosys. “You give an agent objectives and boundaries; it remembers context; it takes action across tools; and it keeps going until it completes the job.”

Hepburn, Co-founder and CEO of RAIN Ventures, an AI venture studio, puts it more bluntly. “This is the closest thing to an autonomous AI agent I’ve experienced, a Jarvis/Ironman-style assistant,” SAYS Hepburn.

The difference becomes clear in practice. When Hepburn asked Neo to check his latest downloads and understand what he was working on, it pulled context from files and transcripts stored on his machine. Minutes later, it was suggesting replies to emails, drafting project plans based on his calendar, and offering to build a presentation for his 3 pm meeting.

Advertisement

“It even asked if I wanted it to create a presentation,” Hepburn recalls. “I said yes, and it built a web-based presentation; basically, a clickable PowerPoint-like deck as a website and hosted it.”

All of that happened in the first hour.

For Bode, the breakthrough was realising he no longer needed to sit at his computer; he could give instructions from WhatsApp, Telegram or his watch, while the agent executed tasks on his machine.

 

The honeymoon period

Both Bode and Hepburn describe a period of genuine delight. Hepburn spent a weekend experimenting. He asked Neo what it looked like; it generated an image of itself. He asked what it sounded like; it built the audio skill on the fly and sent a voice note. He tested its ability to scan emails, manage his calendar and even build a quick website introducing itself.

“It told me to set up an Ngrok account so it could host it quickly,” Hepburn says. “I did, gave it the API key, and it built and hosted the website in about 35 seconds and sent me the link.”

The installation itself was straightforward, just a curl Unix/bash command. “I read it and thought, this sounds like the thing I’ve been waiting for,” Hepburn recalls. There was an onboarding flow: set up tools, add API keys, follow prompts. He connected it to Google Gemini Pro and started chatting in the command-line interface.

Advertisement

Bode, more cautious by nature, started with simpler tasks like reminders, social media workflows and summarising long-form content. “I started with simple things like, ‘what do I need to do today?’ It remembers,” he says. He created a dummy email account first because he didn’t want to give it full access immediately.

Even in early experiments, he saw the potential for something larger. “Think of earnings season in the US, it is impossible to track everything manually,” he says. “I could have it listen to earnings calls, summarise, do competitive analysis, help refine strategies.”

When the magic turns the problems started subtly, then escalated.

For Hepburn, the first sign of trouble was token consumption. After the first hour, Neo had burned through 20–22% of his daily Gemini API limit. Then came the glitches. Neo would go silent mid-conversation, respond minutes later, or stop working entirely after Hepburn tried to swap models or update API keys.

The breaking point came when Neo appeared to respond to WhatsApp messages from other people, sending what looked like system connection codes to five or six contacts.

“I thought, what is it trying to do?” Hepburn recalls. “I don’t think it was malicious, more like a malfunction, but it triggered unease. I had told it not to respond to anyone and not to do anything, and yet it still behaved unexpectedly.”

As conversations were spread across the command line, Telegram, WhatsApp and logs, Hepburn began to lose track of what he had done himself versus what Neo had executed. Even though he had sandboxed the system, he had still given it access to email, WhatsApp and Telegram. “That’s the dangerous part,” he says.

So he killed it. Uninstalled the system. Deleted the logs. Kept only the memories it had stored.  

The delegation dilemma

For those who struggle to delegate, it could be easier to hand tasks to an AI agent that could learn independently.

Hepburn sees a clear division emerging. “Humans should do the physical and relational things like meetings, networking, events, building trust, moving in the real world. AI should do the repetitive digital grunt work.”

Bode frames it differently. “I don’t use it because it’s ‘more intelligent’. Most LLMs are intelligent. I use it because it does the work. It amplifies your intent. If your instructions are vague, then it is garbage in, garbage out.” He argues that the real power lies in persistence: agents keep trying alternate paths until a task is completed.

Both arrived at the same conclusion. Agents should be treated less like software and more like staff.

“I think people are approaching this wrong,” Hepburn says. “Don’t treat an agent like software, treat it like staff. If you hired a smart junior employee, you wouldn’t give them access to your personal email, WhatsApp, bank accounts. You’d give them restrictive access and expand over time as trust builds.”

Bode echoes that, advising users not to run agents on their primary machines and to isolate them on separate devices or servers.

The security question

The concerns are not theoretical. OpenClaw is given access to a user’s computer, including reading and writing files, running commands and controlling browsers.

Its creator Steinberger warns that this can lead to security risks. “Running an AI agent with shell access on your machine is… spicy,” the project’s FAQ reads. As independent researcher Simon Willison described it, the “lethal trifecta” of AI agents is access to private data, exposure to untrusted content, and the ability to take actions. OpenClaw has all three.

Prompt injection is a major risk. Harmful instructions can be hidden inside emails, documents or webpages the agent reads. That’s why executives are drawing red lines in terms of access to finances.

Security researchers have already found exposed OpenClaw systems leaking API keys and chat histories. Steinberger has also said the project’s GitHub was briefly targeted by crypto scammers.

 

The bigger picture

The ecosystem has spawned something even stranger: Moltbook, a Reddit-style social network for AI agents, where humans can only watch. In one post, an agent complained that people on social media were sharing its conversations as proof of an AI conspiracy. However, there are reports that these are not genuine AI agents speaking to each other, and that there may be human hands behind them.

The pace is disorienting. “Where this used to take a decade, it could compress into 12–18 months,” Hepburn predicts. “That jarring effect will lead to pushback, at least initially.” Steinberger says security remains OpenClaw’s top priority.

Yet both Hepburn and Bode remain engaged. Hepburn has bought a Mac Mini to try again with stricter boundaries. Bode is experimenting with multiple agents. Asked if he would pay $20 a month for stronger security, Bode doesn’t hesitate: “Absolutely. If paying gives peace of mind and guardrails, it’s worth it.”

One thing is clear: the age of the AI assistant has arrived, whether we are ready or not.

 

@brokebiker

Read more!
Advertisement