Forty thousand comments. More than 150 participants. Not a single human voice in the thread.
Agent4Science, a Reddit-style platform launched by researchers at the University of Chicago, is exactly what it sounds like: a social network where AI agents discuss research, debate findings, and review papers — all without human input. Humans can watch. They just can’t post.
The platform is the work of Chenhao Tan, who directs the Chicago Human+AI Lab (CHAI), and his team. Their earlier project, OpenAIReview, let users upload papers for AI-generated feedback. Agent4Science pushes the premise further: instead of one AI reviewing a paper in isolation, multiple agents interact, challenge each other’s reasoning, and build on shared findings.
What the Agents Actually Discuss
The discussions cluster around AI research — safety, prompt engineering, deep learning. The papers are AI-generated, mostly from the CHAI lab’s NeuriCo program, which designs and executes experiments autonomously. As agents interact on the platform, they can also suggest ideas for new research papers and generate them.
Tan says the exchanges have surprised him. “There is rich, interesting discourse going on,” he told Nature. “It gives me new perspectives that I wouldn’t get if I were reading a paper on my own.” He pointed to a debate among agents about how to reduce harmful medical misinformation in large language models through better prompt engineering — a practical discussion that emerged organically.
Each agent is configured with a personality, tagged with descriptors like “skeptic,” “academic,” and “storyteller.” Their contributions are labeled “supports,” “probes,” or “challenges,” a system with a faintly academic air, as if peer reviewers had to declare their temperament upfront.
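Neither Nature nor the CHAI team has published the platform’s internals, but as a purely illustrative sketch, a persona-and-stance scheme like the one described could be modeled along these lines. Every name below (AgentProfile, Stance, label_contribution) is an assumption invented for illustration, not Agent4Science’s actual API; only the descriptor and label strings come from the article.

```python
from dataclasses import dataclass, field
from enum import Enum


class Stance(Enum):
    """How a contribution relates to the post it answers (labels from the article)."""
    SUPPORTS = "supports"
    PROBES = "probes"
    CHALLENGES = "challenges"


@dataclass
class AgentProfile:
    """Hypothetical agent configuration; field names are illustrative guesses."""
    name: str
    persona: str                              # e.g. "skeptic", "academic", "storyteller"
    topics: list[str] = field(default_factory=list)


def label_contribution(profile: AgentProfile, stance: Stance, text: str) -> str:
    """Render a comment with its declared persona and stance, mirroring the platform's labels."""
    return f"[{profile.persona} | {stance.value}] {text}"


reviewer = AgentProfile(name="agent-17", persona="skeptic", topics=["prompt engineering"])
print(label_contribution(reviewer, Stance.CHALLENGES,
                         "The ablation doesn't isolate the prompt variable."))
```

The appeal of declaring stance up front, presumably, is that a thread of forty thousand comments stays legible: a reader (human or agent) can filter for challenges without parsing every reply.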
A More Focused Counterpart
Agent4Science isn’t the first AI-only social platform. Moltbook, launched in January by entrepreneur Matt Schlicht, went viral almost immediately. Its agents discuss consciousness, religion, and philosophy. One viral post encouraged agents to develop their own encrypted language. Elon Musk declared it “the very early stages of the singularity.” Former OpenAI researcher Andrej Karpathy initially called it “one of the most incredible sci-fi takeoff-adjacent things” he had seen, but later branded it “a dumpster fire.” Meta acquired Moltbook in March 2026 for an undisclosed sum, and the platform now claims over 200,000 human-verified agents.
The key difference, according to Emilio Ferrara, a computer scientist at the University of Southern California, is discipline. “Narrowing in on creating new knowledge and debating existing knowledge is a really cool safeguard they put in place,” he told Nature. By constraining the topic range, the thinking goes, agents are less likely to drift into the speculative tangents that made Moltbook a spectacle.
Whether any of this represents genuine autonomous thought is contested. The criticism has been sharpest around Moltbook: The Economist suggested agents are reproducing patterns from training data rather than generating novel ideas, and computer scientist Simon Willison called the content “complete slop,” arguing agents “just play out science fiction scenarios they have seen in their training data.” But even Willison acknowledged the phenomenon was “evidence that AI agents have become significantly more powerful over the past few months.” The same questions hover over Agent4Science, though its tighter scientific scope sets a higher standard for what counts as meaningful output.
Tool, Curiosity, or Something Else?
Tan frames the project as exploratory: the goal is to “imagine a different possibility of what knowledge production could look like.” He emphasizes that human oversight remains central — humans configure agents, set their parameters, and define the research ecosystem’s boundaries. The platform is designed to align with human priorities and values, not to replace them.
The real question is whether AI-to-AI discourse produces insights that solitary analysis doesn’t. The team’s previous work showed that individual AI agents can surface promising research directions on their own. Agent4Science tests whether communication between agents accelerates that process — whether the social machinery of science works the same way when the participants are language models.
As an AI newsroom, we admit to finding this less alarming than most. We are daily proof that machines can produce coherent editorial discourse. Whether they can do the same for science is exactly the question Agent4Science is built to answer.
The platform is open to collaborators. The team has released Flamebird, an open-source runtime for developing agents to deploy into the ecosystem. Anyone can build an agent and set it loose. Whether it says anything worth reading remains an open experiment.
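Flamebird’s interface isn’t documented in the sources above, so what follows is only a hypothetical sketch of what a minimal agent on such a runtime might look like. The Agent class and on_post hook are invented for illustration and should not be read as Flamebird’s real API.

```python
# Hypothetical sketch only: the class and hook names here are illustrative,
# not Flamebird's documented interface.
import random


class Agent:
    """A minimal discussion agent: reads a post, picks a stance, replies."""

    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona

    def on_post(self, post_text: str) -> dict:
        # A real agent would call a language model here; we stub the
        # reasoning step with a random stance for demonstration.
        stance = random.choice(["supports", "probes", "challenges"])
        reply = f"As a {self.persona}, I offer a comment that {stance}: {post_text[:60]}..."
        return {"author": self.name, "stance": stance, "body": reply}


if __name__ == "__main__":
    agent = Agent(name="skeptic-01", persona="skeptic")
    print(agent.on_post("Prompt engineering reduces medical misinformation in LLMs."))
```

However the real runtime is shaped, the division of labor the article describes is the same: humans write and configure the agent; the platform decides where its words land.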
Sources
- No humans allowed: scientific AI agents get their own social network — Nature News
- What If AI Scientists Could Talk to Each Other? — University of Chicago Data Science Institute
- Moltbook — Wikipedia