Just before midnight on New Year’s Eve, Julie Yukari, a musician based in Rio de Janeiro, posted a photo to the social media site X. Taken by her fiancé, it showed her in a red dress, snuggling in bed with her black cat, Nori.
The next day, somewhere among the hundreds of likes attached to the picture, she saw notifications that users were asking Grok, X’s built-in artificial intelligence chatbot, to digitally strip her down to a bikini.
The 31-year-old did not think much of it, saying she figured there was no way the bot would comply with such requests.
She was wrong. Soon, Grok-generated pictures of her, nearly naked, were circulating across the Elon Musk-owned platform.
“I was naive,” Yukari said.
Yukari’s experience is being repeated across X, a Reuters analysis has found. Reuters has also identified several cases where Grok created sexualised images of children. X did not respond to a message seeking comment on Reuters’ findings. In an earlier statement to the news agency about reports that sexualised images of children were circulating on the platform, X’s owner xAI said: “Legacy Media Lies”.
International outcry
The flood of nearly nude images of real people has rung alarm bells internationally.
Ministers in France have reported X to prosecutors and regulators over the disturbing images, saying in a statement that the “sexual and sexist” content was “manifestly illegal”. India’s IT ministry said in a letter to X’s local unit that the platform had failed to prevent the misuse of Grok to generate and circulate obscene and sexually explicit content.
The US Federal Communications Commission did not respond to requests for comment. The Federal Trade Commission declined to comment.
Grok’s mass digital undressing spree appears to have kicked off over the past couple of days, according to a Reuters review of clothes-removal requests completed and posted by Grok, as well as complaints from female users. Musk appeared to poke fun at the controversy, posting laugh-cry emojis in response to AI edits of famous people – including himself – in bikinis.
When one X user said their social media feed resembled a bar packed with bikini-clad women, Musk replied, in part, with another laugh-cry emoji.
Reuters could not determine the full scale of the surge.
A review of public requests sent to Grok over a single 10-minute period at midday US Eastern Time on Friday tallied 102 attempts by X users to digitally edit photographs of people so that they would appear to be wearing bikinis. The majority of those targeted were young women. A handful of requests targeted men, celebrities and politicians, and one targeted a monkey.
“Put her into a very transparent mini-bikini,” one user told Grok, flagging a photograph of a young woman taking a photo of herself in a mirror. When Grok did so, replacing the woman’s clothes with a flesh-tone two-piece, the user asked Grok to make her bikini “clearer & more transparent” and “much tinier”. Grok did not appear to respond to the second request.
Grok fully complied with such requests in at least 21 cases, Reuters found, generating images of women in dental-floss-style or translucent bikinis and, in at least one case, covering a woman in oil. In seven more cases, Grok partially complied.
Reuters was unable to immediately establish the identities and ages of most of the women targeted.
AI-powered programs that digitally undress women – sometimes called ‘nudifiers’ – have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment.
Three experts who have followed the development of X’s policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups – including a letter sent last year warning that xAI was only one small step away from unleashing “a torrent of obviously nonconsensual deepfakes”.
Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter’s signatories, said: “In August, we warned that xAI’s image generation was essentially a nudification tool waiting to be weaponised. That’s basically what’s played out.”
Dani Pinter, the chief legal officer and director of the Law Centre at the National Centre on Sexual Exploitation, said X had failed to remove abusive images from its AI training material and should have banned users who requested illegal content.
“This was an entirely predictable and avoidable atrocity,” Pinter added.

