New England Journal of Medicine Retracts Paper Because Photo of Patient’s Insides Was Garbled by AI

Medical journals are being flooded with shoddy AI-generated work, a growing threat to the scientific community that could undermine the value and trustworthiness of potentially life-saving health research. Papers citing hallucinated journals and studies have quickly become a common fixture, raising major concerns among those tasked with wading through a flood of new submissions.

In a high-profile new gaffe, the reputable New England Journal of Medicine (NEJM) was forced to retract a paper by two Beijing-based researchers about a man in China developing “bronchial casts” in his lungs following a wildfire, after it was discovered that the authors had used an AI tool to manipulate a photograph in the piece.

The offending photo shows almost pitch-black, particle-filled bronchial tissues that were cryogenically removed from the patient’s lungs. As MedPage Today reported, an 87-year-old man had been brought to the emergency department at the Beijing Tsinghua Changgung Hospital after extensive fire smoke inhalation, requiring the removal of bronchial tissues that were entirely plugged with smoke particulate matter, an extremely dangerous obstruction of the airway. (MedPage later pointed out the retraction in an editor’s note.)

However, what appears to be a metric measuring tape above the tissues in the photo raises immediate red flags, with the numbers along the scale following a nonsensical sequence — a classic hallmark of the use of an unsophisticated AI image generator.

The authors said the slip-up was a careless accident.

In a retraction note, they wrote that “we were unaware of Journal policies on image manipulation and had altered our submission by using an artificial intelligence (AI) tool to move the ruler to the top of the image.”

“We therefore wish to retract our image and case report,” the note reads.

The blunder should give researchers pause. If simply moving a ruler results in this kind of AI-generated carnage, what other manipulations, whether intentional or unintentional, are falling through the cracks?

Some users on social media also questioned the validity of the rest of the offending image, pointing out that there were too many segments of the senior patient’s lungs in the photo, raising the possibility that the image had been manipulated by AI in other ways.

Reached for comment, the authors provided a more detailed explanation of the snafu, but declined to send the original image for comparison:

The patient was in critical condition and receiving emergency rescue treatment. The ruler was placed at an inclined angle during the urgent clinical photography. We only adjusted the position of the ruler to make the image more aesthetic and visually readable, with no tampering with any other clinical image content.

All original clinical materials have already been provided to the NEJM editorial office, so it is not appropriate to send them to you again separately. You may consult the editorial department for specific details. Our official statement regarding the retraction was drafted following the journal’s suggestions.

We only adjusted the position of the ruler and did not modify the messy scale numbers at all. Our original intention was to ensure all information is completely authentic and traceable. It would have been very easy to use AI to make the ruler scale look perfectly standardized, but we did not do that. Throughout the entire process, we have always acted in good faith, kept all clinical information genuine, and ensured every detail is fully traceable.

We indeed did not have a full understanding of the journal’s relevant policies, which was our mistake. We sincerely apologize for this oversight.

The authors then offered further context:

We merely adjusted the angle and placement of the ruler. The scales of the ruler itself are accurate, and the disorder of the numbers was caused only by our position adjustment. It would have been very easy to fix those disordered numbers with AI, but we did not do so, in order to keep all data authentic and traceable. The ruler itself was not AI-generated.

In an appended editor’s note, the NEJM issued a stark reminder: “Authors are required to disclose any use of AI tools and any changes made to images.”

The journal’s editorial policies state that any use of “large language models, chatbots, or image creators” must be disclosed “at submission.”

“Authors should carefully review and edit all materials produced through the use of AI, to prevent the submission of authoritative-sounding output that is incorrect, incomplete, or biased,” the policy warns.

Meanwhile, editors across the scientific world are bracing themselves for an onslaught of slop.

“Science’s increased vigilance against corruption of the literature has become one more component in science and scientific publishing’s relentless pursuit of the truth,” the journal Science wrote in a January editorial. “Publishing carefully edited papers subjected to the judgment of multiple humans — and the retraction and correction of papers when the humans involved make mistakes — has never been more important.”

More on AI slop and academics: Top Medical Journal Publishes Searing Article Warning Against Medical AI

The post New England Journal of Medicine Retracts Paper Because Photo of Patient’s Insides Was Garbled by AI appeared first on Futurism.