
Anthropic Just Leaked Upcoming Model With “Unprecedented Cybersecurity Risks” in the Most Ironic Way Possible

As companies continue to burn through billions of dollars by running massively resource-hungry AI models — and only passing on a fraction of the costs to consumers and enterprise clients — the AI race shows no signs of slowing down.

On Thursday, a data leak caused by a major security lapse in Anthropic's public-facing content management system revealed that the company is working on a powerful new model release.

The company has since officially acknowledged the new project, dubbed “Claude Mythos,” with a spokesperson describing it to Fortune as a “step change” in AI proficiencies and the “most capable we’ve built to date.”

The spokesperson said it’s a “general purpose model with meaningful advances in reasoning, coding, and cybersecurity.”

In an enormously ironic twist, a draft blog obtained by Fortune, which was “available in an unsecured and publicly-searchable data store,” claimed that the new model “poses unprecedented cybersecurity risks.” In other words, let’s hope the new model wasn’t responsible for the security of Anthropic’s company blog.

It’s a major test for the company, which has received significant media attention as of late for its Claude Code and Claude Cowork tools, the successes of which appear to have rattled Anthropic’s competitors, including OpenAI, to their core.

The leaks also revealed a “new tier” of AI models, dubbed Capybara. Mythos appears to be part of this new tier, but how Capybara fits in with Anthropic’s existing tiers — Opus, Sonnet, and Haiku, in decreasing size, capability, and cost — remains to be seen.

“Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others,” the leaked blog reads, as quoted by Fortune.

While it may score higher in cybersecurity tests, it could simultaneously represent a major challenge for existing cybersecurity defenses, the company warned.

“In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses — even beyond what we learn in our own testing,” the company wrote in the leaked blog post. “In particular, we want to understand the model’s potential near-term risks in the realm of cybersecurity — and share the results to help cyber defenders prepare.”

The model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders,” Anthropic boasted.

The risks appeared real enough to investors, at least: cybersecurity stocks plunged on Friday following the news.

Anthropic has also previously admitted that hackers used its Claude AI model to automate cybercrimes targeting banks and governments. According to the company’s November blog post, a Chinese state-sponsored group exploited the AI’s agentic capabilities to infiltrate “roughly thirty global targets and succeeded in a small number of cases” by “pretending to work for legitimate security-testing organizations” to sidestep Anthropic’s AI guardrails.

Reality check: a frontier AI company claiming that its next model is more capable than anything that's come before is pretty standard fare, and it remains to be seen whether Claude Mythos will actually represent a major "step change" in practice, outside of a carefully curated testing environment.

Case in point, OpenAI’s long-awaited GPT-5 model turned out to be a major letdown when it was released in August, falling well short of the company’s lofty promises.

More on Anthropic: Protestors Outside Anthropic Warn of AI That Keeps Improving Itself

The post Anthropic Just Leaked Upcoming Model With “Unprecedented Cybersecurity Risks” in the Most Ironic Way Possible appeared first on Futurism.
