US Military Using Claude to Select Targets in Iran Strikes

The ongoing attacks on the Islamic Republic of Iran, launched by a joint coalition of US and Israeli military forces, have so far claimed 555 Iranian lives, including 165 deaths from an attack on an elementary school in southern Iran.

As the Wall Street Journal reported while the attacks unfolded, the military strike force selected its targets with help from Anthropic’s Claude chatbot.

According to the paper, Anthropic’s large language model, Claude, is the key “AI tool” used by US Central Command in the Middle East. Its tasks include assessing intelligence, simulating war games, and even identifying military targets — in short, helping military leaders plan attacks that have already claimed hundreds of lives.

Anthropic’s role in the devastating attacks might come as news to anyone who thought the company’s ethical redlines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been embroiled in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully autonomous, lethal weaponry.

It appears that using Claude to select targets, though, isn’t brushing up against the bot’s ethical guardrails.

That’s striking, because Anthropic has spent the latter part of February embroiled in conflict with the Pentagon over the use of Claude.

Last week, the Pentagon — which currently uses Claude throughout its classified systems — set a deadline for Anthropic to drop those dual redlines of surveillance and fully autonomous weaponry. Anthropic let that deadline go by without caving, establishing what many understood as a principled stance against the Trump administration’s militarism.

Yet as Pulitzer Prize-winning national security journalist Spencer Ackerman observed, it’s important to note what Anthropic’s ethical lines ignored when it inked its deal with the military in the first place.

“Amodei, it is highly conspicuous, doesn’t register building a surveillance panopticon of foreigners as a problem,” Ackerman wrote. “The time to worry about everything ostensibly concerning Amodei was before signing the contract that Amodei didn’t wish to abandon. America is in such steep decline that we don’t even make Oppenheimers like we used to.”

“When you take Doctor Doom’s money to provide him a lathe to construct components for anthropomorphic robots,” Ackerman added scathingly, “do you not understand that he is going to build Doombots?”

More on Claude: Anthropic Drops Its Huge Safety Pledge That Was Supposedly the Whole Point of the Company

The post US Military Using Claude to Select Targets in Iran Strikes appeared first on Futurism.
