
Government Insiders Concerned by Musk’s Erratic and Sycophantic Grok Being Deployed for Incredibly Sensitive Purposes

The Trump administration is scrambling to replace Claude, the chatbot embedded throughout the Pentagon's infrastructure, with Elon Musk's pet AI system, Grok.

On paper, xAI's Grok makes sense: the AI model is already used in select parts of the Department of Defense, not to mention other corners of the federal government. Musk should also be deeply familiar with the contours of the federal government, given that he spent the better part of 2025 gnawing the wires out of its walls.

In practice, however, Grok carries some deep flaws. It scores notably lower on AI benchmark tests than other leading models, and it's garnered an infamous reputation for erratic, disgusting, and outrageous outbursts.

It’s also decidedly not the choice of federal insiders, who told the Wall Street Journal there are significant concerns about the safety and efficacy of Grok.

Per the WSJ, multiple officials said Grok is more susceptible to "data poisoning" than other AI systems — an attack in which adversaries inject malicious information to corrupt the data a large language model learns from. (As you might expect, this carries huge cybersecurity risks, especially for an entity like the Pentagon.)

Insiders, speaking anonymously, warned that these concerns went all the way up the chain to Ed Forst, head of the General Services Administration, the arm in charge of federal procurement. The GSA views Grok as both too sycophantic and too susceptible to manipulation, per the paper’s reporting.

Put it all together, and it's easy to see why military officials heavily preferred Claude over Musk's Grok — right up until Anthropic refused the Pentagon's order to remove two key ethical guardrails.

"I do not believe they are peers in performance right now across all of the capabilities that matter to a customer like the Department of [Defense]," Gregory Allen, a senior AI adviser at the Center for Strategic and International Studies, told the WSJ.

Complicating matters for Trump and Hegseth, Sam Altman — the CEO of Anthropic’s bitter rival OpenAI — signaled this week that his company would hold a similar ethical “red line.”

So unless the Trump administration convinces Google or Microsoft to cross the line that Anthropic and OpenAI are upholding, the Pentagon’s stuck with Grok — consequences be damned.

More on Musk: Man Bet Entire Life Savings of $342,195.63 That Elon Musk Would Fail

The post Government Insiders Concerned by Musk’s Erratic and Sycophantic Grok Being Deployed for Incredibly Sensitive Purposes appeared first on Futurism.
