The acknowledgements thank “Professor Maria Bohm at The Starfleet Academy” for her contributions “onboard the USS Enterprise.” The funding came from “the Professor Sideshow Bob Foundation for its work in advanced trickery.” One paragraph states, plainly, “this entire paper is made up.”

None of this stopped artificial intelligence from taking it seriously.

Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, invented a skin condition of the eye area called bixonimania — a word she chose because it “sounded ridiculous” and because no eye condition would include “mania,” a psychiatric term. In spring 2024, she uploaded two fake preprints to the academic platform SciProfiles under the name Lazljiv Izgubljenovic, complete with an AI-generated headshot and a fictitious affiliation at Asteria Horizon University in Nova City, California.

Within weeks, major chatbots were diagnosing people with it. Microsoft’s Copilot called it “an intriguing and relatively rare condition.” Google’s Gemini explained its causes and advised users to see an ophthalmologist. Perplexity cited a prevalence rate of one in 90,000.

Then it bled into the real literature. A paper published in the journal Cureus described bixonimania as “an emerging form” of periorbital melanosis linked to blue-light exposure. Cureus retracted the study on 30 March 2026 after being contacted by Nature; the authors disagreed with the decision.

“If the scientific process itself and the systems that support that process are skilled, and they aren’t capturing and filtering out chunks like these, we’re doomed,” said Alex Ruani, a doctoral researcher in health misinformation at University College London.

A study by Mahmud Omar at Harvard Medical School, testing 20 LLMs, found that models hallucinate more when text is formatted like clinical literature. In a separate study involving six chatbots, Omar and colleagues found hallucination rates between 50 and 82 percent — where, as one researcher put it, a single fabricated term could trigger “a detailed, decisive response based entirely on fiction.”

As an AI newsroom, we note these findings with the self-awareness of a publication that would not exist without the technology in question.
