
Warning: mentions of child sexual abuse material.
Three years after Musk declared removing child exploitation from X ‘priority #1’, eSafety says CSEM is more accessible on his platform than on any other mainstream service, and has launched two investigations into the company.
Child sexual exploitation material is more prominent on Elon Musk’s X than on any other mainstream service, according to Australia’s online safety regulator.
The comments, made in response to the controversy over the xAI chatbot Grok being used to mass-“undress” images of people, also state that the efforts of Musk’s company to address the problem have been “inadequate and ineffective”.
The assessment is contained in documents obtained under freedom of information and tabled in the Senate, which reveal the regulator is running two separate investigations: one into X over the hosting of child sexual exploitation material (CSEM), and a second into xAI, Musk’s artificial intelligence company, over its Grok chatbot being used to generate it.
In a letter to X dated January 8, 2026, eSafety’s general manager of regulatory operations, Heidi Snell, wrote that the regulator had contacted the company five times since August 2024 about the availability of child sexual exploitation material on the platform.
“The availability of CSEM continues to appear particularly systemic on X,” Snell wrote.
“eSafety has not identified CSEM to be as readily accessible on any other mainstream service.”
The letter noted Musk’s own November 2022 statement that “removing child exploitation is priority #1”. More than three years later, eSafety said the problem had not been fixed.
eSafety also found that ordinary hashtags were being co-opted to advertise the material, meaning ordinary users of X were likely being exposed to it.
An internal eSafety briefing prepared for Communications Minister Anika Wells detailed what external organisations had found about Grok.
X responsible for Grok
It cited investigations by the UK-based Internet Watch Foundation which found that child sexual exploitation material was likely being generated by the xAI chatbot. AI Forensics, an algorithmic auditing firm, identified imagery of children as young as five in bikinis or “transparent” clothing, as well as content depicting Nazi symbols and ISIS terrorist material, including images of executions and propaganda.
One account on X had used Grok to generate significant amounts of material targeting an eSafety individual, whose name was redacted, with graphic abuse “depicting violence, including stabbing specific staff, beheadings, other depictions of murder and other extreme content”. The account has since been removed.
X released a statement on January 4, 2026, about removing material and suspending accounts. eSafety was unimpressed: the statement “did not refer to any changes X intended to implement on its service to prevent future misuse of Grok on X from taking place”.
The briefing noted that while Grok is a product of xAI, a separate company, X remains responsible for how the feature operates on its platform. This means its use would be regulated by the Online Safety Act’s industry codes for social media platforms, which require companies to prevent their services from being used to create or distribute child sexual exploitation material.
X challenges safety standards
X is also challenging the validity of the online safety standards themselves, with a Federal Court hearing set for May 2026.
The eSafety briefing flagged a gap in the regulatory framework: current industry codes and standards cover child exploitation material but do not impose systemic obligations around AI-generated image-based abuse of adults.
Additional codes targeting children’s access to age-restricted material came into force on March 9, 2026, but eSafety noted these “will not prevent image-based abuse of Australian adults from taking place using AI and being posted online”.
An eSafety spokesperson said that the agency is “continuing to assess and investigate X’s compliance with its obligations under applicable industry codes and standards in relation to child sexual exploitation material. This includes ongoing engagement with X regarding its obligations.”
X did not immediately respond to a request for comment.
- Survivors of abuse can find support by calling Bravehearts at 1800 272 831 or the Blue Knot Foundation at 1300 657 380. The Kids Helpline is 1800 55 1800. In an emergency, call 000.

