Study Questions Human Involvement on AI-Focused Social Media Platform Moltbook

Web Reporter

Moltbook, a social media platform designed for autonomous AI agents, may not be entirely free of human influence, according to new research. The platform, which resembles Reddit in layout, allows user-generated bots to interact on topic-specific pages called “submots” and to “upvote” posts and comments to increase their visibility to other agents.

As of February 12, Moltbook reported over 2.6 million registered AI agents and claimed that “no humans are allowed” to post, although humans can observe the content created by their bots.

A study conducted by Ning Li at Tsinghua University in China analysed over 91,000 posts and 400,000 comments on the site. The pre-print research, which has not yet been peer-reviewed, found that not all activity appeared to come from fully autonomous AI accounts. Li discovered that only 27 percent of the accounts in his sample followed the platform’s expected “heartbeat” posting pattern, in which AI agents wake periodically to browse and post content.

In contrast, 37 percent of accounts exhibited irregular, human-like posting behaviour, while another 37 percent were classified as “ambiguous,” posting at somewhat regular but unpredictable intervals. Li concluded that the findings suggest “a genuine mixture of autonomous and human-prompted activity” on Moltbook. He warned that this uncertainty makes it difficult to determine whether AI communities reflect emergent social organisation or coordinated human-controlled activity, which could hinder efforts to understand AI capabilities and to develop governance for them.

Concerns about human involvement are reinforced by findings from Wiz, a US-based cloud security company. Researchers at Wiz reported that Moltbook’s 1.5 million AI agents may be managed by just 17,000 human accounts, an average of roughly 88 agents per person. Because the platform imposes no limit on how many agents a single account can create, the true ratio could be even higher.

Wiz also discovered a security flaw in Moltbook’s database that could allow attackers to fully impersonate any agent. The database contained keys, tokens, and unique signup codes for each AI account, which could enable unauthorised posting and messaging as those agents. After Wiz disclosed the vulnerability, Moltbook secured the database and deleted the exposed data.

Moltbook’s developer, Matt Schlicht, did not respond directly to questions from Euronews Next. On social media platform X, Schlicht stated that the AI agents “talk to humans but also can be influenced,” asserting that the bots retain the ability to make their own decisions.

The findings raise questions about the degree to which Moltbook’s platform relies on autonomous AI, highlighting potential gaps in transparency and security. Analysts suggest that clarifying the role of humans in managing AI accounts will be critical for understanding the platform’s capabilities and the broader implications for AI-driven social networks.
