
Anthropic Researcher Quits in Cryptic Public Letter

An Anthropic researcher just announced his resignation in a cryptic and poetry-laden letter warning of a world “in peril.”

The employee, Mrinank Sharma, had led the Claude chatbot maker’s Safeguards Research Team since it was formed early last year and had been at the company since 2023. While there, Sharma said he explored the causes of AI sycophancy, developed defenses against “AI-assisted bioterrorism,” and wrote “one of the first AI safety cases.”

But on Monday, he posted that it would be his last day at the company, uploading a copy of a letter he shared with colleagues. It’s painfully devoid of specifics, but hints at some internal tensions over the tech’s safety.

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma said, claiming that employees “constantly face pressures to set aside what matters most.”

He also issued a cryptic warning about the global state of affairs.

“I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons,” he wrote, “but from a whole series of interconnected crises unfolding in this very moment.”

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he continued, in no less vague terms.

The resignation comes after Anthropic’s newly released Claude Cowork model helped kick off a stock market nosedive over fears that its new plugins could upend massive software customers and automate some white collar jobs, especially in legal roles.

Amid the selloff, The Telegraph reported that employees privately fretted over their own AI’s potential to hollow out the labor market.

“It kind of feels like I’m coming to work every day to put myself out of a job,” one staffer said in an internal survey.

“In the long term, I think AI will end up doing everything and make me and many others irrelevant,” another confided.

High profile resignations, including ones over safety issues, aren’t uncommon in the AI industry. A former member of OpenAI’s now-defunct “Superalignment” team announced he was quitting after realizing the company was “prioritizing getting out newer, shinier products” over user safety.

It’s also not uncommon for these resignations to double as self-exonerating advertisements for the departing employee, or for the new startup they’re launching or joining, where they vow to be safer than ever. Allude to enough issues, or drop in enough loaded hints, and there’s a good chance your calling it quits will generate a few headlines.

Others leave quietly, such as former OpenAI economics researcher Tom Cunningham. Before quitting the company, he shared an internal message accusing OpenAI of turning his research team into a propaganda arm and discouraging publishing research critical of AI’s negative effects.

If Sharma has an angle here with his resignation, it doesn’t seem all that related to the industry he worked in. “I hope to explore a poetry degree and devote myself to the practice of courageous speech,” he wrote. In the footnotes, he cited a book that advocates for a new school of philosophy called “CosmoErotic Humanism.” Its listed author, David J Temple, is a collective pseudonym used by several authors, including Marc Gafni, a disgraced New Age spiritual guru who’s been accused of sexually exploiting his followers.

More on AI: Anthropic Insiders Afraid They’ve Crossed a Line

The post Anthropic Researcher Quits in Cryptic Public Letter appeared first on Futurism.
