Image

Engineers Deploy “Poison Fountain” That Scrambles Brains of AI Systems

To push back against AI, some call for blowing up data centers

If that’s too extreme for your tastes, then you might be interested in another project, which instead advocates for poisoning the resource that the AI industry needs most, in a bid to cut off its power at the source.

Called Poison Fountain, the project aims to trick tech companies’ web crawlers into vacuuming up “poisoned” training data that sabotages AI models.  If pulled off at a large enough scale, it could in theory be a serious thorn in the AI industry’s side — turning their billion dollar machines into malfunctioning messes.

The project, reported on by The Register, launched last week. And strikingly, its members work for major US AI companies, according to The Register’s source, who warns that the “situation is escalating in a way the public is not generally aware of.”

“We agree with Geoffrey Hinton: machine intelligence is a threat to the human species,” reads a statement on the project’s website, referring to the British computer scientist who is considered a godfather of the field, and who has become one of the industry’s most prominent critics. “In response to this threat we want to inflict damage on machine intelligence systems.”

A key tipping point for the modern AI boom wasn’t just the architecture of the AI models themselves, but the revelation that they would need to be trained on vast troves of data that were once thought to be unfeasible to obtain. The explosion of the internet conveniently provided a gold mine of freely available information, which was scraped in unbelievable quantities. Many argue that this practice was not only unethical but illegal, spawning numerous copyright lawsuits.

Generally, an AI is only as good as the data it’s trained on. Jumble that data, and you jumble the AI. Some efforts have already tried to foil AI models with this approach, including software used to subtly embed images with disruptive data so artists could ward off AIs copying their work.

Poison Fountain is a call-to-action to pull off something similar, on a huge scale. To do this, it provides links to poisoned data sets that website owners can hide in their web pages to trick AI web crawlers. The links, the project promises, “provide a practically endless stream of poisoned training data.” The project insider explained to The Register that the hazardous data comprises code that contains logic errors and other bugs that can damage large language models trained on it.

It’s a clever way of disrupting the AI industry’s rapid expansion, though it remains to be seen how widely adopted the method will be, or if AI companies will be able to easily sift it out of their scraped data hoards.

Needless to say, it’s also not the only work being done to rein in unbridled AI, with numerous groups advocating for stringent regulation, and a slew of copyright suits that threaten to seriously hinder tech companies’ ability to vacuum up data. But those at Poison Fountain argue that regulation alone isn’t the answer, since AI is already widely available.

“Poisoning attacks compromise the cognitive integrity of the model,” the project insider told The Register. “There’s no way to stop the advance of this technology, now that it is disseminated worldwide. What’s left is weapons. This Poison Fountain is an example of such a weapon.”

More on AI: Elon’s xAI Is Losing Staggering Amounts of Money

The post Engineers Deploy “Poison Fountain” That Scrambles Brains of AI Systems appeared first on Futurism.

Releated Posts

Court Having Trouble Assembling Jury for Elon Musk Because People Hate Him So Much

Being one of the most despised men in America does come with a few downsides, it turns out.…

Feb 21, 2026 4 min read

Personal Electronics Spiking in Price as AI Industry Buys Up All the Components

The AI industry’s obsession with building out enormous data centers to house power-hungry chips has put a major…

Feb 21, 2026 2 min read

Amazon’s Blundering AI Caused Multiple AWS Outages

Are AI tools reliable enough to be used at in commercial settings? If so, should they be given…

Feb 21, 2026 4 min read

Stanford’s New “Universal Vaccine Formula” Nasal Spray Protects Mice Against Stunning Range of Diseases

Stanford Medicine researchers claim they’ve invented a “universal vaccine formula” that protects mice against a wide range of…

Feb 21, 2026 4 min read