Grok Is Being Used to Depict Horrific Violence Against Real Women

Earlier this week, a troubling trend emerged on X-formerly-Twitter as people started asking Elon Musk’s chatbot Grok to unclothe images of real people. This resulted in a wave of nonconsensual pornographic images flooding the largely unmoderated social media site, with some of the sexualized images even depicting minors.

In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg.

When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed.

Much of this material was directed at online models and sex workers, who already face a disproportionately high risk of violence and homicide.

One of the disturbing Grok-generated images we reviewed depicted a widely followed model restrained in the trunk of a vehicle, sitting on a blue tarp next to a shovel — insinuating that she was on her way to being murdered.

Other AI images involved people specifically asking Grok to put women in scenarios where they were obviously being assaulted, which was made clear by users requesting that the chatbot make the women “look scared.” Some users asked for humiliating phrases to be written on women’s bodies, while others asked Grok to give women visible injuries like black eyes and bruises. Many Grok-generated images involved women being put into restraints against their will. At least one user asked Grok to create incestuous pornography, to which the chatbot readily complied.

That a social media-infused chatbot could so readily transform into a nonconsensual porn machine to create unwanted and even violent images of real women at scale is, on its face, deeply unsettling. Even worse was that the creators of these images often seemed to be treating the action like a game or meme, with an air of laughter and detachment.

That nonchalance may speak to a normalization of this kind of nonconsensual content, which had previously been relegated largely to darker corners of the internet. Women and girls, meanwhile, continue to face the real-world harm wrought by nonconsensual deepfakes, which are easier than ever to generate thanks to AI-powered “nudify” tools — and, apparently, multibillion-dollar chatbots.

We’ve reached out to xAI for comment, but haven’t received a reply.

But yesterday, Musk, who owns both X and xAI, took to the social media platform to ask netizens to “please help us make Grok as perfect as possible.”

“Your support,” he added, “is much appreciated.”

More on Grok and safety: Elon Musk’s Grok Is Providing Extremely Detailed and Creepy Instructions for Stalking

The post Grok Is Being Used to Depict Horrific Violence Against Real Women appeared first on Futurism.
