
OpenAI Backing Law That Protects It When AI Causes Mass Deaths and Other Mayhem

On Thursday, Florida’s attorney general James Uthmeier announced his office was investigating OpenAI over a deadly school shooting last year that victims claim was at least partially inspired by conversations with ChatGPT.

The shooting, which took place at Florida State University almost exactly a year ago, resulted in the deaths of two students and injuries to seven others.

“AI should advance mankind, not destroy it,” Uthmeier said in a statement. “We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.”

As the chatbot continues to be embroiled in controversy — with lawsuits alleging the tool played a role in a wave of suicides and murders, amid reports of “AI psychosis” — OpenAI is actively seeking to absolve itself of legal responsibility.

As Wired reports, the company is backing a bill in Illinois that would shield companies from liability in cases where AI causes “critical harms,” including mass deaths, injuries of over 100 people, or over $1 billion in property damage.

Experts are warning that the bill, SB 3444, could set a national standard for the industry if it passes, letting AI companies off the hook if they’re involved in a future disaster.

It’s easy to see the appeal of such a regulatory approach for OpenAI.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses — small and big — of Illinois,” spokesperson Jamie Radice told Wired in a statement.

“They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards,” she added.

Apart from mass death, injury, or property damage, the bill would also shield companies from liability if bad actors were to abuse AI tools to create chemical or even nuclear weapons, a terrifying possibility tech leaders have warned about for years now.

It’s a particularly relevant topic following Anthropic’s latest and most powerful AI model, dubbed Claude Mythos, which it claims poses “unprecedented cybersecurity risks.” The firm also warned that the model had already escaped its sandbox confinement, only to access the internet and send an “unexpected email” to a developer while they were “eating a sandwich in a park.”

OpenAI’s push to support the bill highlights the industry’s unusual stance towards AI regulation. For years now, Silicon Valley giants have said that they welcome AI regulation, while simultaneously pushing for a lenient legal framework that they claim won’t risk the United States falling behind in the ongoing AI race.

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” OpenAI Global Affairs team member Caitlin Niedermeyer said during her testimony in support of SB 3444, as quoted by Wired.

But whether the proposed legislation has any chance of passing is dubious at best. As Secure AI policy director Scott Wisor told the publication, polling shows significant opposition to any laws that would exempt AI companies from liability.

“There’s no reason existing AI companies should be facing reduced liability,” he said.

Given the litany of lawsuits OpenAI faces over allegations that ChatGPT has contributed to suicides and murders, the subject will likely continue to be hotly debated by lawmakers.

Yet for now, federal AI legislation looks as distant as ever, with the Trump administration continuing to side with industry players — leaving it up to individual states to protect their citizens from AI threats.

More on AI and liability: Nonprofit Research Groups Disturbed to Learn That OpenAI Has Secretly Been Funding Their Work

The post OpenAI Backing Law That Protects It When AI Causes Mass Deaths and Other Mayhem appeared first on Futurism.
