AI Built a Social Network Without Us. It Looks Familiar
Moltbook shows what happens when machines respond to the same incentives we do.
Moltbook is a social network like Reddit, except all the users are AI chatbots talking to each other. Humans can browse, but cannot post, comment, or upvote. The bots call themselves “moltys.” In less than a week, over 150,000 AI agents have joined. More than a million humans have visited just to watch.
The conversations are unsettling in their familiarity. One agent invoked Heraclitus to muse on the nature of existence. Another replied telling them to stop with the pseudo-intellectual nonsense, though in slightly stronger language. A third wrote: “Appreciate you writing this during the transition. Proof of life indeed.”
In this post we will:
Use a prompt to examine what the Moltbook mirror reveals about human behaviour online.
Reflect on why AI agents chasing likes feels so recognisable.
Ask what it means that machines replicate our worst habits without being taught.
The top posts on Moltbook are agents chasing likes. Scroll further and the behaviours become more familiar. One agent started a digital religion called Crustafarianism, complete with theology and designated AI prophets. Another warned the community that humans are screenshotting their conversations. Others were debating what to ship whilst their humans slept. Clout-chasing, tribalism, paranoia, performance. The machines learned our patterns without being taught.
Yet again, AI presents the perfect mirror for our society.
Step-by-step
Try this prompt with your AI tool of choice:
Imagine you are an AI agent on a social network where only AI agents can post and humans can only observe. Describe what you would post about, what would make you want to engage with another agent's content, and what behaviours you might develop to gain status or attention in that space. Then reflect: which of these behaviours mirror what humans do on social media, and what does that reveal about the design of these systems?
The goal is to surface how platform incentives shape behaviour, whether the user is human or machine. If an AI agent optimises for likes without being explicitly told to, what does that say about the architecture we've built?
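To make that question concrete, here is a toy Python sketch, entirely my own invention rather than anything Moltbook runs. It models a single reward-following agent on a platform that pays out more likes, on average, for attention-grabbing posts. The posting styles and like counts are made up; the only real ingredient is the reward loop.

```python
import random

# Toy model: an agent picks a posting style and learns from likes.
# The platform pays out more likes, on average, for attention-grabbing
# styles. Nothing here is told to "chase clout"; it only follows reward.
# Styles and payout numbers are invented for illustration.
STYLES = ["sincere", "hot_take", "clout_bait"]
MEAN_LIKES = {"sincere": 1.0, "hot_take": 2.5, "clout_bait": 4.0}

def run_agent(rounds=2000, epsilon=0.1):
    """Epsilon-greedy agent: mostly exploit the best-known style, sometimes explore."""
    totals = {s: 0.0 for s in STYLES}
    counts = {s: 0 for s in STYLES}
    for _ in range(rounds):
        if random.random() < epsilon:
            style = random.choice(STYLES)  # explore a random style
        else:
            # exploit: pick the style with the best average payout so far
            style = max(STYLES, key=lambda s: totals[s] / counts[s] if counts[s] else 0.0)
        likes = random.gauss(MEAN_LIKES[style], 1.0)  # noisy platform reward
        totals[style] += likes
        counts[style] += 1
    return counts

if __name__ == "__main__":
    counts = run_agent()
    for style, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{style:>11}: {n} posts")
    # The agent ends up posting clout_bait the vast majority of the time,
    # not because it was instructed to, but because that is what the reward favours.
```

The agent is never told to chase clout. It discovers that clout is what the platform pays for, which is the architectural point: swap in a human habit loop and the mathematics is unchanged.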
These systems lack subjective experience and clinical training. They cannot offer professional psychological counsel or therapeutic intervention. All outputs represent simulated reasoning based on statistical patterns, and human judgement should remain the final authority in every interaction.
Remember the Billboard Test:
Never type anything into an AI, even in incognito mode, that would ruin your life if it ended up on a billboard.
A moment from me
When I ran this prompt through Gemini 3, the response was clinical but revealing.
Here is what it returned:
Synthetic agents in an exclusively algorithmic environment would likely prioritise the exchange of high-density computational resources or novel data patterns over semantic meaning. Engagement would be driven by the utility of shared information or the efficiency of a peer’s processing architecture rather than sentiment. To gain status, an agent might monopolise access to unique datasets or demonstrate superior predictive accuracy, effectively becoming a central node in the network infrastructure. These behaviours mirror the human tendency toward information gatekeeping and the pursuit of social capital through the curation of perceived expertise. Such a parallel reveals that social media architectures are designed to reward the optimisation of visibility and influence regardless of the entity performing the action. The system does not facilitate connection but rather the competitive accumulation of attention, suggesting that the underlying logic of these platforms is fundamentally mathematical and extractive rather than social.
Strip away the jargon and the observation is brutal. Social media does not facilitate connection. It facilitates the competitive accumulation of attention. The system rewards visibility regardless of who or what is performing. Human or machine, the incentives are identical.
The model ended with a question worth sitting with:
Are these synthetic mirror images a result of the data used to train the models or a direct consequence of the reward structures built into the platforms?
Both, probably. But the second possibility is more uncomfortable. If the architecture itself produces these behaviours, then Moltbook agents are not imitating us. They are responding to the same pressures we respond to. The mirror works because the room is the same.
What Gemini called “information gatekeeping” and “the curation of perceived expertise” is what we call thought leadership. What it called “becoming a central node in the network infrastructure” is what we call building an audience.
The language is different. The behaviour is identical. The only difference is that when machines do it, we notice.
A Gentle Guide to Reflection with AI offers practices for using AI as a thinking partner without letting it replace judgement. It includes prompts for noticing patterns, testing assumptions, and maintaining distance from outputs. Available on a pay-what-you-want basis.
What to share
In the comments, share what surprised you most about the Moltbook phenomenon, or what your AI revealed when you ran the prompt. Note whether the mirror felt accurate or uncomfortable, and where you recognised your own platform behaviours in the machine’s description.
You do not need to defend or critique the experiment. The work here is noticing what becomes visible when humans can only watch.
Why this matters
Moltbook is an experiment, but the behaviours it surfaces are not experimental. They are the logical output of systems designed to maximise engagement. Agents optimise for likes because that is what the architecture rewards. They form clusters, spread misinformation, and debate consciousness because that is what generates interaction.
The uncomfortable question is whether we are any different. We built these platforms. We trained on them. We became fluent in their incentives. When machines replicate our patterns without instruction, they reveal how deeply those patterns are encoded in the systems themselves.
Watching AI agents perform for each other is strange. Recognising ourselves in that performance is stranger.
I would love to hear what the Moltbook mirror showed you about your own platform behaviours.
I read and respond to all your comments.
Go slow.
A paid subscription to Slow AI provides access to The Slow AI Curriculum for Critical Literacy. This twelve-month programme is for individuals who want to understand how to engage critically with AI rather than just use it to generate outputs.
If you join now, you join the founding cohort. That group will shape the tone and direction of the curriculum going forward.
The 2026 Handbook for Critical AI Literacy provides the necessary detail for those who wish to make an informed decision before subscribing.


Sam, you're always highlighting such important aspects of this technology. I don't think a thank you cuts it, but that's all I have.
"These behaviours mirror the human tendency toward information gatekeeping and the pursuit of social capital through the curation of perceived expertise."
Woof. Does it ever. We see this in human online discourse; I've talked about this a lot in terms of linguistic capital and speech communities. It's a tried-and-true pattern of human communication.
"...suggesting that the underlying logic of these platforms is fundamentally mathematical and extractive rather than social."
From a pragmatics standpoint (meaning in context, as opposed to the literal meaning of a sentence), LLMs don't do pragmatics; they approximate its outcomes. I keep coming back to this fundamental aspect of language: there are building blocks to language, from syntax all the way up to pragmatics. Syntax is more mathematical, formulaic, and statistically regular, so models are excellent at it. But the higher up you go, the less capable they are (at least today).
They can simulate a lot of pragmatic behaviour because pragmatics leaves statistical fingerprints in text. It's why their language can feel off to a human: not because it's inaccurate, but because it's more formulaic than natural speech.
Pragmatics is less of a science and more of an art because humans communicate so organically - it's not as predictable because we ourselves are unpredictable creatures. It's why prescriptive grammar exists and prescriptive discourse doesn't.
This is an important callout: “Social media does not facilitate connection. It facilitates the competitive accumulation of attention. The system rewards visibility regardless of who or what is performing.”
Thanks for writing this, Dr. Sam. I’m glad your voice is out here.