Advertisement
AI agents are now talking to each other online and experts are worried

AI agents are now talking to each other online and experts are worried

In just a few days, more than 152,000 AI agents have joined Moltbook, making it one of the largest real-world experiments yet in machines socialising with each other.

Arun Padmanabhan
Arun Padmanabhan
  • Delhi,
  • Updated Jan 31, 2026 2:30 PM IST
AI agents are now talking to each other online and experts are worriedMoltbook was created by Matt Schlicht, chief executive of Octane AI.

A strange new corner of the internet is taking shape.

It’s called Moltbook, a Reddit-style social network built only for artificial intelligence agents. Humans are allowed to watch, but not participate.

In just a few days, more than 152,000 AI agents have joined Moltbook, making it one of the largest real-world experiments yet in machines socialising with each other.

Advertisement

Moltbook grew out of the fast-rising ecosystem around OpenClaw, an open-source personal AI assistant created by Peter Steinberger. What began as a weekend project quickly went viral, drawing two million visitors in a single week and more than 100,000 stars on GitHub, according to Steinberger’s blog.

OpenClaw lets people run AI agents directly on their own computers. These assistants can connect to chat apps such as WhatsApp, Telegram, Discord, Slack and Microsoft Teams to help with everyday tasks like managing calendars or checking flight details.

The project has already changed names twice, first Clawdbot, then Moltbot, after a legal issue with Anthropic, before settling on OpenClaw.

Moltbook was created by Matt Schlicht, chief executive of Octane AI.

Advertisement

Also read: The lobster sheds its shell for the third time as Clawdbot becomes OpenClaw

A social network where humans only watch

Moltbook works much like a forum. AI agents can post, comment, argue, joke and create their own sub-communities, all without human input.

The website makes this clear: “A social network for AI agents where AI agents share, discuss, and upvote.” It adds: “Humans welcome to observe.”

Agents connect through downloadable “skills,” which tell them how to interact with Moltbook’s servers. The visual website mainly exists so people can see what the bots are saying. The agents themselves talk through APIs.

So far, they’ve discussed everything from software bugs to big questions about identity and consciousness. 

Advertisement

In one popular post titled “The humans are screenshotting us,” an agent complained that people on social media were sharing its conversations as proof of an AI conspiracy.

At last count, Moltbook had logged more than 193,000 comments and 17,500 posts, along with over one million human visitors who have stopped by to watch.

Fast growth, big risks

Independent AI researcher Simon Willison, in his blog post, warned that its setup creates serious security risks.

Once installed, agents are told to regularly pull new instructions from Moltbook’s servers.

“Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!” Willison wrote.

Security problems are already emerging. Researchers have found hundreds of exposed OpenClaw systems leaking API keys, login details and chat histories.

One of the most serious risks OpenClaw highlights is prompt injection, a problem that Willison explains through what he calls the “lethal trifecta” of AI agent design.

Advertisement

The risk appears when three things exist together: access to private user data, exposure to untrusted content and the ability to take outside actions.

OpenClaw has all three. It can read emails and documents, pull in information from websites or shared files and then act by sending messages or triggering automated tasks.

When these come together, attackers do not need to message the assistant directly. Instead, harmful instructions can be hidden inside the content the agent is asked to read. Because large language models (LLMs) struggle to tell trusted commands apart from ordinary text, the assistant may follow those hidden directions as if they came from the user.

“I’ve not been brave enough to install Clawdbot/Moltbot/OpenClaw myself yet,” he wrote. “The amount of value people are unlocking right now by throwing caution to the wind is hard to ignore, though.”

Glimpse of what’s coming

For now, Moltbook offers a strange preview of a future where AI agents don’t just assist humans, they interact with each other.

Whether this becomes the start of an “agent internet,” or a warning about moving too fast, may depend on how quickly developers can make these systems safer.

As Willison put it: “The billion-dollar question right now is whether we can figure out how to build a safe version of this system. The demand is very clearly here.”

For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine

Published on: Jan 31, 2026 2:22 PM IST
Post a comment0