In 2024, a team led by University of Gothenburg medical researcher Almira Osmanovic Thunström invented a fake disease they called “bixonimania.” The fictional skin condition, they said, was caused by staring at screens for too long and rubbing one’s eyes too much.
As Nature reports, the team uploaded two fake studies about the condition (both of which have since been taken down) to a preprint server in an effort to trick large language models into thinking it was real.
It didn’t take long for their ruse to take off. Within just weeks of uploading the fake studies, frontier AI models including Google’s Gemini and OpenAI’s ChatGPT started talking about bixonimania as if it were real. Not much later, researchers found that the fake papers had even started to be cited in other peer-reviewed academic literature.
The experiment highlights how profoundly AI is changing the face of human knowledge. AI slop has invaded almost every facet of the peer-review process. Researchers have previously found that a vast portion of scientific papers being indexed by journals each year could be heavily relying on AI, raising thorny questions over their validity, not to mention the erosion of rigor and trust.
Meanwhile, AI chatbots continue to dole out dangerous health advice to often unsuspecting users. A quick perusal of Osmanovic Thunström’s papers would have allowed virtually anybody, scientist or not, to immediately clock the ruse: the fake papers make peculiar references to “Star Trek,” “The Simpsons,” and “The Lord of the Rings,” raising obvious red flags.
But despite all that, AI chatbots including Microsoft’s Bing Copilot, Google’s Gemini, and Perplexity’s AI search engine became convinced that “bixonimania” was real.
While ChatGPT had a momentary lapse of reason, informing Nature last month that the condition “is probably a made-up, fringe, or pseudoscientific label,” it changed its mind when asked just a few days later, saying the disease was real.
In a statement to the magazine, an OpenAI spokesperson argued that the tech had gotten “better at providing safe, accurate medical information.”
Now that the cat is out of the bag, it’s up to journals to clean up any errant peer-reviewed papers that lean on Osmanovic Thunström’s fictional research. After Nature reached out to one of them over several papers that alluded to “bixonimania,” the journal promptly posted a retraction notice, admitting the “presence of three irrelevant references, including one reference to a fictitious disease.”
“It is worrying when these major claims are just passing through the literature unchallenged, or passing through peer review unchallenged,” Osmanovic Thunström told Nature. “I think there’s probably a lot of other issues that haven’t been uncovered.”
Users on the r/medicine subreddit had a far more pessimistic take.
“We are cooked,” one of them wrote.
The post Researchers Invented a Fake Disease to Trick AI and the Funniest Possible Thing Happened appeared first on Futurism.