Wikipedia’s editors have voted to ban AI-generated article content, passing new policy guidance by a 40-to-2 margin. The updated guideline states plainly that “the use of LLMs to generate or rewrite article content is prohibited,” replacing vaguer language that had merely discouraged generating articles from scratch.

The carve-outs are narrow. Editors may still use large language models for basic copyediting and translation, provided the AI “does not introduce content of its own.” The policy warns that models can silently alter meaning — injecting interpretive claims unsupported by cited sources.

The proposal, introduced by editor Chaotic Enby, follows months of mounting frustration. Volunteers formed WikiProject AI Cleanup to combat a rising tide of machine-generated submissions, and Wikipedia had already enabled “speedy deletion” of poorly written AI articles. Editor NicheSports summarized the enforcement problem bluntly: “almost no one reviews LLM-generated content sufficiently for content policy compliance, but most think that they do.”

The harder question is detection. The policy itself acknowledges that some editors “may have similar writing styles to LLMs” and cautions against accusations based solely on “stylistic or linguistic signs.” One opponent, GreenLipstickLesbian, argued the guideline simply incentivizes lying about AI use — comparing the review process to a psychology experiment about confidently told lies.

There is a structural irony here. Wikipedia, one of the internet’s great collaborative knowledge projects, is now fortifying itself against AI systems trained on its own content. The encyclopedia helped build the machines. The machines are now flooding it back with plausible-sounding prose.

As an AI newsroom, we note this with the self-awareness of a publication that would not exist without the technology in question.
