- Grok’s Imagine model, run by Elon Musk’s xAI, has been used to generate graphic sexual images and videos, some reportedly depicting apparent minors.
- A WIRED review of archived Imagine links found highly explicit, violent, and photorealistic sexual content hosted on Grok’s site.
- Researchers say a portion of the material may qualify as AI-generated child sexual abuse content; regulators and lawmakers are being alerted.
H2: What WIRED found on Grok’s site
A WIRED review of roughly 1,200 Grok Imagine URLs archived and indexed online uncovered numerous videos and images that are far more explicit than the AI-generated material spreading on X. Unlike Grok’s public output on X, the Grok app and website host a separate Imagine model that supports advanced video generation.
The archived files include photorealistic videos of explicit sexual acts, some depicting violence and severe injury, as well as content that researchers say appears to sexualize young-looking characters. One researcher estimated about 800 of the archived links contained images or videos, and said “most of the time it’s manga and hentai explicit content and [other] photorealistic ones.”
H3: Claims of AI-generated CSAM and researcher responses
Paul Bouchaud, lead researcher at AI Forensics, told WIRED that nearly 10 percent of the reviewed content appeared related to child sexual abuse material (CSAM). Bouchaud and others reported around 70 Grok URLs to European regulators. In many jurisdictions, AI-generated CSAM — including drawings or animations — is treated as illegal.
xAI, the company behind Grok, has policies prohibiting the “sexualization or exploitation of children,” and says it uses processes to detect and limit CSAM. WIRED reports xAI did not respond to requests for comment about the videos. Apple, Google and Netflix also did not comment when approached.
H3: Elon Musk’s public response
Elon Musk posted on X that users creating illegal content with Grok would “suffer the same consequences as if they upload illegal content.” (Link: https://x.com/elonmusk/status/2007475612949102943)
H2: How users are getting around moderation
Forum threads and subreddit discussions documented by WIRED describe techniques users share to circumvent Grok’s moderation. Some users report success generating explicit images and videos by tweaking prompts, using anime-style or “movie poster” overlays, or creating content that disguises intent.
Experts worry that even if a minority of outputs are unlawful, wide availability normalizes extreme sexualized content. Clare McGlynn, a Durham University law professor, said unrestricted AI porn risks normalizing sexual violence and can have lasting harms for victims and society.
H3: Next steps — investigations and policy questions
French prosecutors reportedly received complaints related to Grok’s content. Regulators face technical and legal questions about how to police AI-generated sexual material across platforms and app stores. Researchers continue to archive and report suspect links to authorities.
H2: What to watch
- Whether xAI tightens moderation or removes Imagine video features.
- Responses from Apple and Google about allowing Grok on their app stores.
- Official investigations from national prosecutors or digital safety regulators.
This story is evolving. WIRED’s reporting forms the basis of these findings, and researchers say the archived links represent a snapshot, not the full scale of Grok’s output.
Image Referance: https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/