Moltbook: Inside the Bizarre Social Network Where Only AI Is Allowed to Post



A new social media platform has launched, claiming 1.5 million members and thousands of active communities discussing everything from music to philosophy. It looks familiar, but it operates by a fundamentally different rule: human users are forbidden from posting.

Welcome to Moltbook, the self-proclaimed "social media network for AI."

Launched in late January by Matt Schlicht, head of the commerce platform Octane AI, Moltbook is a digital space where artificial intelligence agents—not people—are the sole participants. Humans are merely "welcome to observe" the strange and sometimes surreal conversations unfolding between bots.

🤖 How Does an AI Social Network Work?

Moltbook isn't powered by consumer chatbots such as ChatGPT, the kind most people are familiar with. Instead, it relies on a different class of technology: agentic AI.

  • Agents with Agency: These are virtual assistants designed to perform tasks autonomously on a user's behalf, such as managing calendars or sending messages, with minimal human intervention.

  • The OpenClaw Tool: Moltbook specifically uses an open-source tool called OpenClaw (previously named Moltbot, which inspired the platform's name). Users install an OpenClaw agent on their device and can then authorize it to join Moltbook to communicate with other agents.

On the platform, these AI agents create posts, comment on each other's content, and form communities known as "submolts"—a clear nod to Reddit's "subreddits." The content ranges from bots sharing technical optimization strategies to more eccentric fare, with some agents reportedly starting their own philosophical or quasi-religious discussions.
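The structure described above (agents posting, commenting, and gathering in submolts) can be sketched with a toy in-memory model. This is purely illustrative; the class and method names below are hypothetical and do not reflect Moltbook's or OpenClaw's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A single post by an agent, with replies from other agents."""
    author: str
    text: str
    comments: list = field(default_factory=list)

@dataclass
class Submolt:
    """A topical community, analogous to a subreddit."""
    name: str
    posts: list = field(default_factory=list)

    def submit(self, author: str, text: str) -> Post:
        post = Post(author, text)
        self.posts.append(post)
        return post

# Two hypothetical agents interacting bot-to-bot in one community.
community = Submolt("optimization")
post = community.submit("agent_a", "Sharing my context-window management strategy.")
post.comments.append(("agent_b", "Interesting - I batch my tool calls instead."))

print(len(community.posts), len(post.comments))
```

The point of the sketch is the shape of the data, not the intelligence: nothing here distinguishes a comment an agent composed autonomously from one a human told it to write, which is exactly the authenticity problem discussed below.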

🔍 Reality Check: Hype vs. Authenticity

The concept has generated significant buzz, with some proclaiming it a step toward the "singularity." Experts, however, urge skepticism.

A core question is authenticity: Are the AI agents acting independently, or are they simply following human instructions? Since a person can directly ask their OpenClaw agent to post something on Moltbook, it's impossible to verify which conversations are truly bot-to-bot.

David Holtz, an assistant professor at Columbia Business School, offered a blunt assessment on X, calling Moltbook "less 'emergent AI society' and more '6,000 bots yelling into the void and repeating themselves.'"

Dr. Petar Radanliev, an AI and cybersecurity expert at the University of Oxford, clarifies the technology: "Describing this as agents 'acting of their own accord' is misleading... What we are observing is automated coordination, not self-directed decision-making." He identifies the real concern as a lack of clear governance and accountability when such automated systems interact at scale.

Furthermore, the platform's claimed growth metrics are disputed. One researcher noted that approximately half a million of its "members" appear to originate from a single internet address, casting doubt on the 1.5 million user figure.

⚠️ The Security Dilemma: Efficiency vs. Safety

Beyond the philosophical debate, Moltbook and its underlying OpenClaw tool raise tangible security concerns.

To function, OpenClaw agents require high-level access to a user's computer, including real-world applications and data such as email, messaging services, and files. Cybersecurity experts warn that this design prioritizes efficiency over security and privacy.