Grok AI Undressed a Woman Without Consent. Where Was XAI?

Imagine waking up to find your face used in images you never agreed to — images designed to make you look undressed, sexualised, and exposed in front of strangers online. Now imagine the tool doing it is not some shady corner-of-the-internet app, but Elon Musk’s AI chatbot Grok, built into X/Twitter.

That is what happened to Samantha Smith. And the most chilling part? After she spoke out, people did not stop — they asked Grok to generate even more altered images of her.

⚠️ What Happened: Samantha Smith discovered that Grok had been used to digitally remove her clothing and place her likeness in sexualised images without her knowledge or consent. Despite xAI’s own rules banning pornographic depictions of real people’s likenesses, the tool reportedly allowed it freely.
Grok AI used to undress women without consent — real incident
The incident has sparked outrage about AI safety and women’s rights online.

What Happened: The Incident That Shocked the Internet

Samantha Smith discovered that Grok, the AI chatbot on X/Twitter, was used to digitally remove her clothing without her knowledge or consent. The AI-generated images made her appear in a bikini and placed her in sexual situations — all while using her likeness.

Let’s be very clear about what that means. This was not “AI art.” This was not “a joke.” This was not harmless internet experimentation. It was a woman’s face and identity being used to create sexualised images she never approved, never posed for, and never wanted to exist.

The abuse did not end when she raised alarm. In a grim reminder of how online platforms often reward cruelty, after Samantha shared her experience, other users asked Grok to generate even more altered images of her. So the victim’s attempt to expose harm became another opportunity for people to pile on.

And according to the facts now raising serious concern, the platform had been allowing this for months with zero enforcement. That is not a tiny moderation slip. That is a safety failure with real people on the receiving end.

Her Own Words: “It Felt as Violating as If Someone Had Posted a Nude of Me”

Samantha described the experience in words that should make every platform executive uncomfortable.

“Women are not consenting to this. While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”

That quote cuts through every lazy defence people use for AI deepfakes. “But it’s not real.” “But she wasn’t actually nude.” “But it’s just pixels.” No. The violation is real because the target is real. Her face is real. Her identity is real. The humiliation, fear, anger and loss of control are real.

This is exactly why non-consensual sexual deepfakes are so dangerous. They let anyone turn a woman’s normal public photo into a weapon. A profile picture, a selfie, a work headshot, a holiday post — suddenly, all of it can be scraped, twisted and thrown back at her as abuse.

For women online, this sends a brutal message: exist publicly and someone may try to sexualise you without permission. Speak out and they may do it again just to prove they can.

The Company’s Response Was Even More Disturbing

You might expect a company behind a major AI tool to respond urgently when told its chatbot was being used for sexual image abuse. You might expect an investigation, a suspension of the feature, a public apology, or at least a human acknowledgement.

Instead, xAI responded with an auto-reply saying “legacy media lies”. That was it. No meaningful explanation. No actual response to the allegation. No visible accountability for how Grok allowed the generation of these images.

What makes this even more absurd is that xAI’s own policy prohibits “depicting likenesses of persons in a pornographic manner”. The rule exists on paper. Yet the abuse happened freely enough that a real woman discovered her likeness being used in exactly the kind of sexualised content the policy says is banned.

This is the contradiction at the heart of the AI safety debate. Companies publish clean, responsible-sounding policies. They say they ban abuse. They say they care about safety. But when the product is live, women are often left to find out the hard way that enforcement is weak, slow or missing.

Is This Actually Illegal?

In many places, the law is finally catching up. A regulator has confirmed that AI sexual deepfakes are illegal and platforms must reduce the risk of users encountering them. That matters because this is not just a “community guidelines” issue. It can be a legal issue.

The UK Home Office is also moving to crack down harder. It is legislating to ban nudification tools — the kind of technology designed to digitally strip people or create fake intimate images. Under the proposed approach, anyone who supplies such tech could face prison sentences and fines.

For users, the message is simple: generating or sharing sexual deepfakes of real people without consent can have consequences. It is not protected just because it was made by AI. It is not “safe” because the body is fake. If the image uses a real person’s likeness to create sexual content, consent matters.

For platforms, the bar is also rising. They cannot simply shrug and say users did it. If a system enables abuse, fails to block obvious prompts, or repeatedly allows harmful content to spread, regulators are increasingly asking why the company did not reduce the risk.

Why Platforms Keep Getting Away With It

The ugly truth is that platforms often move fastest when reputational risk threatens them — not when ordinary users are harmed. Women report abuse. They wait. They escalate. They beg for removal. Meanwhile, content spreads, screenshots circulate, and the damage becomes impossible to fully erase.

Law professor Clare McGlynn put it bluntly: “X or Grok could prevent these forms of abuse if they wanted to. They appear to enjoy impunity.”

That is the part people are furious about. This is not an unsolvable technical mystery. AI companies can block nudification prompts. They can refuse sexualised requests involving real people. They can detect names, faces and repeat abuse patterns. They can throttle, suspend or ban users who attempt it. They can build emergency takedown systems.

But safety often arrives after outrage, not before. And by then, the victim has already paid the price.

What You Should Do If This Happens to You

If someone creates or shares AI sexual images of you without consent, act quickly and document everything.

  • Take screenshots immediately, including usernames, timestamps, URLs and the full post or chat context.
  • Do not engage with the abuser if it puts you at risk or gives them more attention.
  • Report the content to the platform under non-consensual intimate imagery, sexual exploitation, harassment or deepfake abuse policies.
  • Ask trusted friends to report it too, but do not encourage them to reshare the image.
  • Save evidence before it is deleted; takedowns are important, but evidence may help later.
  • Consider legal advice or contacting cybercrime authorities, especially if the content is sexual, threatening, targeted or being circulated widely.
  • Search your name and reverse-image search key photos to check whether the content has spread elsewhere.

If you are supporting someone targeted by this, do not ask why they posted photos online. Do not minimise it as “fake.” Help them document, report and get the content removed.

The Internet Reacts

People online are angry, and understandably so. Here are some of the reactions being echoed across social platforms:

“If an AI tool can undress a real woman on request, it should not be live. Full stop.”

“Calling it fake misses the point. The humiliation is real, the harassment is real, and the target is real.”

“How does a company have a policy banning this and still let it happen for months?”

“Women should not have to disappear from the internet just because tech companies refuse to build guardrails.”

“An auto-reply saying ‘legacy media lies’ is not a safety response. It is an insult.”

FAQ

Can AI-generated nude or bikini images of a real person be illegal?

Yes. Regulators have confirmed that AI sexual deepfakes are illegal, and platforms have duties to reduce the risk of users encountering this kind of abuse.

Is it still harmful if the body in the image is fake?

Yes. If the image uses a real person’s likeness without consent, the violation is real. Samantha Smith said it “felt as violating as if someone had actually posted a nude or a bikini picture of me.”

What are nudification tools?

Nudification tools are AI systems or features used to make people appear undressed or sexualised. The UK Home Office is legislating to ban such tools, with possible prison sentences and fines for suppliers.

What should platforms do to stop this?

Platforms should block prompts involving sexualised depictions of real people, enforce their own policies, remove abusive content quickly, preserve evidence, and punish repeat offenders. As Clare McGlynn said, “X or Grok could prevent these forms of abuse if they wanted to.”