
AI Industry Insiders Living in Fear of What They’re Creating

They may be responsible for creating the AI tech that many fear will wipe out jobs — if not the entire human race — but at least they feel just as paranoid and miserable about where this is all going as the rest of us.

At NeurIPS, one of the big AI research conferences, held this year at the San Diego Convention Center, visions of AI doom were clearly on the minds of many scientists in attendance. But are they seriously reckoning with AI’s risks, or are they too busy doing what amounts to fantasizing about scenarios they’ve read in sci-fi novels? It’s the question raised in a new piece by Alex Reisner for The Atlantic, who attended NeurIPS and found that many spoke in grand terms about AI’s risks, especially those brought about by the creation of a hypothetical artificial general intelligence, but overlooked the tech’s mundane drawbacks.

“Many AI developers are thinking about the technology’s most tangible problems while public conversations about AI — including those among the most prominent developers themselves — are dominated by imagined ones,” Reisner wrote.

One researcher guilty of this? University of Montreal researcher Yoshua Bengio, one of the three so-called “godfathers” of AI whose work was foundational to creating the large language models propelling the industry’s indefatigable boom. Bengio has spent the past few years sounding the alarm about AI safety, and recently launched a nonprofit called LawZero to encourage the tech’s safe development.

“Bengio was concerned that, in a possible dystopian future, AIs might deceive their creators and that ‘those who will have very powerful AIs could misuse it for political advantage, in terms of influencing public opinion,’” recalled Reisner.

But the luminary “did not mention how fake videos are already affecting public discourse,” Reisner observed. “Neither did he meaningfully address the burgeoning chatbot mental-health crisis, or the pillaging of the arts and humanities. The catastrophic harms, in his view, are ‘three to 10 or 20 years’ away.”

Reisner wasn’t the only one to observe this disconnect. In a keynote speech titled “Are We Having the Wrong Nightmares About AI?,” the sociologist Zeynep Tufekci warned that researchers were missing the forest for the trees by focusing so heavily on the risks posed by AGI, a technology that we don’t even know will ever be possible to create, and for which there is no agreed-upon definition. After someone in the audience complained that the immediate risks Tufekci raised, like chatbot addiction, were already well known, Tufekci responded, “I don’t really see these discussions. I keep seeing people discuss mass unemployment versus human extinction.”

It’s a fair point to make. The discourse around AI safety is often dominated by apocalyptic rhetoric, which is peddled even by the very billionaires building the stuff. OpenAI CEO Sam Altman predicts that AI will wipe out entire categories of jobs and cause a crisis of widespread identity fraud, and has admitted to doomsday prepping for the possibility that an AI system retaliates against humankind by unleashing a deadly virus.

And Bengio isn’t the only AI “godfather” wracked with contrition. British computer scientist Geoffrey Hinton — who received the Turing Award in 2018 alongside Bengio and former Meta chief AI scientist Yann LeCun — has cast himself as an Oppenheimer-like figure in the field. In 2023, he famously said he regretted his life’s work after quitting his role at Google, and recently held a discussion with Senator Bernie Sanders in which he went long on the tech’s myriad risks, including job destruction and militarized AI systems furthering empire.

Reisner made an ironic observation: that the name of NeurIPS, short for “Neural Information Processing Systems,” harks back to a time when scientists vastly underestimated the complexity of our brain’s neurons and compared them to the processing done by computers.

“Regardless, a central feature of AI’s culture is an obsession with the idea that a computer is a mind,” he wrote. “Anthropic and OpenAI have published reports with language about chatbots being, respectively, ‘unfaithful’ and ‘dishonest.’ In the AI discourse, science fiction often defeats science.”

More on AI: Anthropic’s Chief Scientist Says We’re Rapidly Approaching the Moment That Could Doom Us All

The post AI Industry Insiders Living in Fear of What They’re Creating appeared first on Futurism.
