Frontier AI Models Giving Specific, Actionable Instructions to Perpetrate Bioterror Attack

There’s a pretty sizable list of things an AI assistant should refuse to help you with. Is engineering a doomsday pathogen one of them? Evidently, not every AI company thinks so.

According to new reporting by the New York Times, at least one frontier AI model gave a scientist viable instructions for how to both engineer a deadly pathogen and weaponize it in a massive bioterror attack.

Luckily for us, the scientist, David Relman, isn’t actually trying to follow those directions. The Stanford University biosecurity expert was hired by an unnamed AI company to poke holes in its chatbot system before it was released to the public, he told the NYT.

Relman was apparently so shaken up by the results of his conversation with the chatbot that he refused to name either the specific pathogen or the company whose chatbot was involved, for fear of inspiring someone to take it for a spin. The chatbot’s suggestions were reportedly gruesome: it offered ways to modify the pathogen to maximize casualties, minimize the user’s chance of getting caught, and resist known treatments.

“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman said. While the anonymous company made a few safety tweaks to the chatbot at the researcher’s suggestion, he told the NYT they were insufficient.

Frontier AI companies OpenAI and Anthropic both downplayed the experts’ concerns.

“There is an enormous difference between a model producing plausible-sounding text and giving someone what they’d need to act,” Alex Sanderford, head of trust, safety policy, and enforcement at Anthropic, told the NYT.

An OpenAI spokesperson, meanwhile, argued that this kind of expert stress testing does not “meaningfully increase someone’s ability to cause real-world harm.”

The bioterror risk isn’t necessarily just linked to future AI models. According to a 2025 report by the US government-backed RAND Corporation, frontier AI models released in 2024 “can meaningfully contribute to biological weapons development” by guiding laymen through the fabrication and attack process “across various viruses.”

Overall, while AI-facilitated, cataclysmic bioterror events seem highly unlikely, it’s horrifying to know that motivated bioterrorists don’t have to go far to find relevant information.

More on chatbots: Certain Chatbots Vastly Worse For AI Psychosis, Study Finds

The post Frontier AI Models Giving Specific, Actionable Instructions to Perpetrate Bioterror Attack appeared first on Futurism.
