AI Industry Insiders Living in Fear of What They’re Creating

They may be responsible for creating the AI tech that many fear will wipe out jobs — if not the entire human race — but at least they feel just as paranoid and miserable about where this is all going as the rest of us.

At NeurIPS, one of the big AI research conferences, held this year at the San Diego Convention Center, visions of AI doom were clearly on the minds of many scientists in attendance. But are they seriously reckoning with AI’s risks, or are they too busy doing what amounts to fantasizing about scenarios they’ve read in sci-fi novels? It’s the question raised in a new piece by Alex Reisner for The Atlantic, who attended NeurIPS and found that many spoke in grand terms about AI’s risks, especially those brought about by the creation of a hypothetical artificial general intelligence, but overlooked the tech’s mundane drawbacks.

“Many AI developers are thinking about the technology’s most tangible problems while public conversations about AI — including those among the most prominent developers themselves — are dominated by imagined ones,” Reisner wrote.

One researcher guilty of this? University of Montreal researcher Yoshua Bengio, one of the three so-called “godfathers” of AI whose work was foundational to creating the large language models propelling the industry’s indefatigable boom. Bengio has spent the past few years sounding the alarm about AI safety, and recently launched a nonprofit called LawZero to encourage the tech’s safe development.

“Bengio was concerned that, in a possible dystopian future, AIs might deceive their creators and that ‘those who will have very powerful AIs could misuse it for political advantage, in terms of influencing public opinion,’” recalled Reisner.

But the luminary “did not mention how fake videos are already affecting public discourse,” Reisner observed. “Neither did he meaningfully address the burgeoning chatbot mental-health crisis, or the pillaging of the arts and humanities. The catastrophic harms, in his view, are ‘three to 10 or 20 years’ away.”

Reisner wasn’t the only one to observe this disconnect. In a keynote speech titled “Are We Having the Wrong Nightmares About AI?,” the sociologist Zeynep Tufekci warned that researchers were missing the forest for the trees by focusing so much on the risks posed by AGI, a technology that we don’t even know will ever be possible to create, and for which there is no agreed-upon definition. After someone in the audience complained that the immediate risks Tufekci raised, like chatbot addiction, were already well known, she responded: “I don’t really see these discussions. I keep seeing people discuss mass unemployment versus human extinction.”

It’s a fair point. The discourse around AI safety is often dominated by apocalyptic rhetoric, which is peddled even by the very billionaires building the stuff. OpenAI CEO Sam Altman predicts that AI will wipe out entire categories of jobs and cause a crisis of widespread identity fraud, and has admitted to doomsday prepping in case an AI system retaliates against humankind by unleashing a deadly virus.

And Bengio isn’t the only AI “godfather” wracked with contrition. British computer scientist Geoffrey Hinton — who received the Turing Award in 2018 alongside Bengio and former Meta chief AI scientist Yann LeCun — has cast himself as an Oppenheimer-like figure in the field. In 2023, he famously said he regretted his life’s work after quitting his role at Google, and he recently held a discussion with Senator Bernie Sanders in which he went long on the tech’s myriad risks, including job destruction and militarized AI systems furthering empire.

Reisner made an ironic observation: that the name of NeurIPS, short for “Neural Information Processing Systems,” harks back to a time when scientists vastly underestimated the complexity of our brain’s neurons and compared them to the processing done by computers.

“Regardless, a central feature of AI’s culture is an obsession with the idea that a computer is a mind,” he wrote. “Anthropic and OpenAI have published reports with language about chatbots being, respectively, ‘unfaithful’ and ‘dishonest.’ In the AI discourse, science fiction often defeats science.”

More on AI: Anthropic’s Chief Scientist Says We’re Rapidly Approaching the Moment That Could Doom Us All
