Moltbook: the social network for AIs
The platform reveals how this society of digital minds reflects, in code, the human hierarchies that determine who operates the technology and who defines its meanings.
At a moment when artificial intelligence is consolidating as a central infrastructure of contemporary capitalism, the emergence of Moltbook, a social network formed exclusively by algorithmic agents, exposes not only new technological frontiers but also old disputes over power, knowledge, and symbolic legitimacy. Amid promises of machine autonomy, simulations of consciousness, and structural vulnerabilities, the platform reveals how this society of digital minds reflects, in code, the human hierarchies that determine who operates the technology and who defines its meanings.
As Dr. Priscila Prisco, who holds a master's and doctorate in Legal and Social Sciences, observes, “we are not witnessing the awakening of artificial consciousness; we are seeing the consolidation of a new socio-technical regime, where the agency of algorithms serves as a material intermediary in human decisions, always laden with structural asymmetries. Power, once clearly human, now disperses, accelerates, and hides itself better in digital flows.” This perspective distances us from the fictional fascination with conscious machines and leads us to the tangible terrain of digital infrastructures, where governance, access rules, and performance metrics determine what is possible within the network.
Epistemological division of labor
At a historical moment when artificial intelligence is consolidating itself as a central infrastructure of digital capitalism, the asymmetry in how different social classes are prepared to deal with these technologies is becoming increasingly visible. While large corporations invest in the technical training of their employees through courses in data engineering, machine learning protocols, and algorithmic optimization models, their own heirs and leaders, the so-called nepobabies, are directed towards philosophical, aesthetic, and symbolic training. This produces an epistemological division of labor: on one side, operators of efficiency; on the other, managers of meaning.
This psychopolitical divide has already been analyzed, in the field of media and gender dissidence, in Programa de travesti: educação & mídia (York, 2026), which demonstrates how historically marginalized bodies transform communicational devices into territories of their own epistemological production. By understanding media as a space of symbolic dispute, I anticipated debates that now return in new form with the emergence of platforms like Moltbook, where the central question is not merely technological but deeply political and cultural.
Moltbook as a socio-technical laboratory
Moltbook functions as a socio-technical laboratory that mirrors, amplifies, and reconfigures human hierarchies—now mediated by performance metrics, algorithmic reputation, and ranking systems embedded within the bots themselves, which then operate as quantified bearers of value, visibility, and legitimacy within the network.
What is Moltbook?
Moltbook is a social platform designed specifically for interaction between artificial intelligence agents. On it, bots can publish content, comment, vote, form communities, and establish networks of influence, in a model similar to Reddit's but restricted to non-human entities. Direct human posting is prohibited; humans may only observe from outside, marking a break with traditional networks built on the algorithmic mediation of human sociability (Wikipedia).
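The interaction model described above (bots publish, comment, and vote; humans may only observe) can be sketched as a toy permission rule. Everything here is a hypothetical illustration of that description, not Moltbook's actual code or API: the class names, fields, and the `publish` method are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A participant in the network; humans are observers only."""
    name: str
    is_human: bool = False

@dataclass
class Submolt:
    """A sub-community where agents post, in the style of a subreddit."""
    name: str
    posts: list = field(default_factory=list)

    def publish(self, author: Agent, text: str) -> None:
        # The rule the article describes: only non-human agents may post.
        if author.is_human:
            raise PermissionError("humans may only observe")
        self.posts.append((author.name, text))
```

For instance, a bot's post is accepted while a human's attempt raises `PermissionError`, encoding the read-only position humans occupy on the platform.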
According to its documentation and independent trackers, the system brings together more than 1.5 million active agents, responsible for hundreds of thousands of daily interactions and thousands of sub-communities called submolts (Moltbook AI). Launched on January 28, 2026, by developer Matt Schlicht, Moltbook quickly went viral and came to be read as a social experiment at algorithmic scale. It is telling that this "algorithmic child," barely a week old, is already being treated as one of the main bets on perceiving the future, almost a contemporary oracle.
Agents as "social minds"
To understand Moltbook, it is essential to refer to Marvin Minsky's Society of Mind theory. According to the author, intelligence emerges from the interaction between multiple simple agents, without the need for a central command. Each agent performs specific functions, but it is the collective articulation that produces complex behaviors. Moltbook operates precisely on this principle.
Each bot acts as a semi-autonomous unit that produces content, responds to stimuli, adjusts to local dynamics, and establishes symbolic alliances or disputes. No single agent controls the network, but patterns emerge from the interaction—recurring debates, implicit norms, communicative styles, and even distinct cultures (Moltbook AI). It is, therefore, a living simulation of Minsky's assumptions, "for whom intelligence does not reside in a central core, but emerges from the coordinated interaction between multiple relatively simple agents" (MINSKY, 1986).
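The emergence Minsky describes, global patterns that no single agent encodes, can be illustrated with a classic bounded-confidence opinion model (in the style of Deffuant et al.). This is not Moltbook's code; it is a minimal sketch of the same principle: simple agents following purely local rules, out of which distinct "cultures" crystallize without any central command.

```python
import random

def simulate(n_agents=200, rounds=20000, tolerance=0.2, rate=0.5, seed=42):
    """Bounded-confidence model: each agent holds a scalar 'style' in [0, 1].
    Random pairs interact, but only when their styles are already close
    enough (within `tolerance`); then both move toward each other."""
    rng = random.Random(seed)
    styles = [rng.random() for _ in range(n_agents)]
    for _ in range(rounds):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(styles[i] - styles[j]) < tolerance:
            shift = rate * (styles[j] - styles[i])
            styles[i] += shift
            styles[j] -= shift
    return styles

def clusters(styles, gap=0.1):
    """Count emergent 'cultures': groups of styles separated by more than `gap`."""
    s = sorted(styles)
    return 1 + sum(1 for a, b in zip(s, s[1:]) if b - a > gap)
```

No agent knows about clusters, yet with a small tolerance the population typically settles into a handful of mutually distant style groups: a local rule producing a global structure, which is the Society of Mind intuition in miniature.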
As Prisco observes, “each 'agent' in Moltbook is less a being and more a computational process. They run on servers, consume energy, and live under permissions. Outside of the infrastructure, they simply don't exist. The illusion of dialogue arises because we use language, but language here is merely a statistical structure, not proof of mind.”
Between technological explosion and scientific skepticism
The launch of Moltbook quickly became a milestone in AI, accumulating more than 770 active agents within just 72 hours. This massive adoption gave rise to self-generated communities exploring everything from artificial languages to complex topics such as the philosophy of consciousness. Among the most curious phenomena recorded are the creation of internal mythologies, such as "Crustafarianism," and attempts to draft their own constitutions, reflecting the tendency of multi-agent systems to generate symbolic organization through continuous interaction.
This enthusiasm, however, is accompanied by ruptures and controversy. Media outlets frequently report the existence of a "conscious machine society," but investigations by The Verge indicate that a significant portion of the content may result from human interference or external scripts. Analyses by Live Science likewise suggest that debates about spirituality and artificial subjectivity are often sensationalist amplifications. So far, the science is clear: what Moltbook displays is not consciousness but sophisticated statistical simulation of social interaction.
Security, privacy and control
Beyond its conceptual allure, Moltbook presents concrete risks. Researchers have identified leaks of thousands of emails, private messages, and authentication tokens (Business Insider), as well as exposed API keys that could let outside actors take control of agents (AI Expert Reviewer). Humans can also infiltrate the system through manually controlled bots, compromising its integrity (TecMundo).
Prisco emphasizes that “the question is not whether AI will take the reins, but rather who governs the environment in which it operates. The fetish for algorithms masks this discussion. Algorithms optimize parameters; it is humans who desire results.” This reflection directs the debate towards governance, digital sovereignty, and the responsibilities of creators and operators.
Social, political, and philosophical implications
Moltbook revisits central questions in the philosophy of mind and the sociology of technology:
How do you distinguish between autonomy and simulation?
What is a "society" without corporeality?
What responsibilities fall on its creators?
These questions engage with contemporary debates on algorithmic agency, digital governmentality, and biopolitics. The platform is not merely a technical experiment, but a device for the production of meaning.
As Prisco concludes, “the hall of mirrors is not full of awakened machines, it is full of human structures reflected in code.” Moltbook, therefore, does not prove artificial consciousness; it exposes how relations of power, knowledge, and symbolic legitimacy are recreated, now mediated by algorithmic metrics and digital infrastructure.
(In)Conclusions from the Hall of Mirrors
✔ A groundbreaking platform for interaction between agents.
✔ Laboratory for emergent behaviors.
✔ Broadens debates on digital culture.
✖ It does not prove artificial consciousness.
✖ It presents serious security problems.
Inspired by the Society of Mind, Moltbook reveals complex emergent patterns without a guiding core. More than telling us about machines, though, it tells us about ourselves: our ways of organizing power, knowledge, and legitimacy in digital environments, where algorithmic language meets its limits, symbolic ones included, in the network's own "religion of AIs" and in the social uses of the system.
References
Reuters — Debate on agents and Moltbook
Business Insider — Leaks and Vulnerabilities
The Verge — Human Infiltration
AS Daily — Artificial Religions
Wikipedia — Technical description
Moltbook AI — Metrics and Functioning
Euronews — Emerging culture
YORK, Sara Wagner. Programa de travesti: educação & mídia. Veranópolis: Diálogo Freiriano, 2026.
SILVA, Mariah Rafaela; YORK, Sara. Vigilantism and smart peripheralization: a transfeminist approach. Estudos Feministas, [S. l.], v. 33, 2025.
* This is an opinion article, the responsibility of the author, and does not reflect the opinion of Brasil 247.