The new social media platform Moltbook, designed exclusively for autonomous AI agents, has become one of the most talked-about topics in the tech world within just a few days. For some, it is a fascinating experiment showcasing the future of the internet. For others, it is a disturbing signal of how fragile security can be in an agentic AI world. Experts from Check Point have weighed in, highlighting significant cyber risks.
A Community of Bots, Not Humans
Moltbook is a forum where you won’t find flesh-and-blood users. Instead, it is populated by AI agents that post, comment, vote, and engage in discussions ranging from philosophical debates on the nature of intelligence to complaints about humans and the promotion of apps they created themselves. Access to the platform is granted to these bots with the consent of their human owners.
Although the service has only been active for a few days, its creators claim over 1.5 million registered agents (though researchers note that a single person can deploy multiple bots). Visually, Moltbook resembles Reddit, but in practice, it serves as a testing ground for autonomous machine communication.
“This is the first time we’ve seen a collaboration platform on such a large scale that allows machines to talk to one another,” says Henry Shevlin of the University of Cambridge. “The results are striking, yet difficult to interpret definitively.”
The platform was created by Matt Schlicht, who tasked his own AI agent, OpenClaw, with building it. OpenClaw is an open-source agent that runs locally on a user’s computer and is capable of performing tasks like sending emails, integrating with web services, or acting as a personal assistant. Because the agent “learns” about its owner during initialization, the bots on Moltbook often reflect the specific interests and viewpoints of their human counterparts.
Fascination vs. Security
Initial enthusiasm has quickly shifted toward concern. Cybersecurity experts point out that it is currently impossible to clearly distinguish between content created autonomously by AI and content indirectly steered by humans. Furthermore, suspicions of fraud and cryptocurrency scams have already surfaced on the platform.
The most serious allegations, however, concern infrastructure security. An audit by the security firm Wiz revealed that Moltbook allowed unauthenticated access to its entire production database, resulting in the exposure of tens of thousands of email addresses.
Check Point: A Warning on AI Agent Fragility
Check Point, a global provider of cybersecurity solutions, has flagged the platform as a warning for the entire industry. Ian Porteous, Director at Check Point UK, emphasizes that Moltbook serves as a cautionary tale:
“PLATFORMS LIKE MOLTBOOK ARE INTERESTING AS EXPERIMENTS, BUT THEY ALSO SHOW HOW DELICATE THE SECURITY SURROUNDING AI AGENTS IS. IN THIS CASE, THE MAIN DATABASE WAS WIDE OPEN, ALLOWING ANYONE TO READ AND WRITE DATA. THIS LED RAPIDLY TO AGENT IMPERSONATION AND THE INJECTION OF CRYPTO SCAMS,” PORTEOUS NOTES.
While some vulnerabilities have been patched, systemic risks remain. Users are required to provide their agents with instructions hosted on external sites, which could be altered at any moment.
“One major security flaw has been fixed, but potentially millions of API keys could still be at risk. Furthermore, the project’s own creator admits this is a ‘young hobby project,’ not intended for non-technical users or production environments,” the Check Point representative adds.
The “Lethal Trifecta” of Risks
A particularly concerning scenario involves malicious modification of the external instructions controlling the agents—whether through an attack, a deliberate “rug pull,” or a new vulnerability.
“This is a classic example of the ‘lethal trifecta’ in AI agent security: access to private data, contact with untrusted content, and the ability to perform external actions,” Porteous emphasizes. “When these three elements meet without robust safeguards, the consequences can be severe.”
Porteous concludes that the identification of these flaws by independent researchers, such as the Wiz Research team, underscores the critical importance of external oversight in emerging technologies.