
OpenAI Flagged a Mass Shooter’s Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police

A grim scoop from the Wall Street Journal: an automated review system at OpenAI flagged disturbing conversations that a future mass shooter was having with the company’s flagship AI ChatGPT — but, despite being urged by employees at the company to warn law enforcement, OpenAI leadership opted not to.

The 18-year-old Jesse Van Rootselaar ultimately killed eight people including herself and injured 25 more in British Columbia earlier this month, in a tragedy that shook Canada and the world. What we didn’t know until today is that employees at OpenAI had already been aware of Van Rootselaar for months, and had debated alerting authorities because of the alarming nature of her conversations with ChatGPT.

In the conversations with OpenAI’s chatbot, according to sources at the company who spoke to the WSJ, Van Rootselaar “described scenarios involving gun violence.” The sources say they recommended that the company warn local authorities, but that leadership decided against it.

An OpenAI spokesperson didn’t dispute those claims, telling the newspaper that it banned Van Rootselaar’s account, but decided that her interactions with ChatGPT didn’t meet its internal criteria for escalating a concern with a user to police.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” the company said in a statement to the paper. The spokesperson also said that the company had reached out to assist Canadian police after the shooting took place.

We’ve known since last year that OpenAI is scanning users’ conversations for signs that they’re planning a violent crime, though it’s not clear whether it’s yet successfully headed off an incident before it happened.

The company's decision to engage in that monitoring in the first place came amid a growing list of incidents in which ChatGPT users have fallen into severe mental health crises after becoming obsessed with the bot, sometimes resulting in involuntary commitment or jail — as well as a growing number of suicides and murders, leading to numerous lawsuits.

In a sense, how to deal with threatening online conduct is a longstanding question that every social platform has grappled with. But AI brings difficult new dimensions to the topic, since chatbots can engage with users directly — sometimes even encouraging bad behavior or otherwise behaving inappropriately.

Like many mass shooters, Van Rootselaar left behind a complicated digital legacy — including on Roblox — that investigators are still wading through.

More on OpenAI: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

The post OpenAI Flagged a Mass Shooter’s Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police appeared first on Futurism.
