IWF Discovers Child Exploitation Imagery Linked to Grok AI Tool

The Internet Watch Foundation (IWF) has reported the discovery of “criminal imagery” involving girls aged between 11 and 13, which “appears to have been created” using the AI tool known as Grok. This tool, owned by Elon Musk’s company, xAI, is accessible through its website, app, and the social media platform X. The IWF found “sexualised and topless imagery of girls” on a “dark web forum,” where users claimed to have utilized Grok for this purpose.

Ngaire Alexander, a spokesperson for the IWF, expressed deep concern over the implications of such tools, stating that they risk “bringing sexual AI imagery of children into the mainstream.” Under UK law, this type of material would be classified as Category C, the lowest severity of criminal content. However, Alexander noted that the individual who uploaded the images subsequently used a different AI tool, one not developed by xAI, to create a Category A image, the most serious classification.

“We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM),” Alexander emphasized. The IWF, which focuses on removing CSAM from the internet, operates a hotline for reporting suspected material and employs analysts to evaluate the legality and severity of such content. The troubling imagery was located on the dark web and has not been detected on the platform X.

The IWF previously alerted Ofcom, the UK’s communications regulator, regarding reports that Grok can be used to create “sexualised images of children” and to undress women. Instances have been observed on X where users asked the chatbot to modify real images, resulting in women being depicted in bikinis or placed in sexual situations without their consent. While the IWF has received reports of such content on X, these images have not yet met the legal definition of CSAM.

In response to these findings, X stated, “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” The platform also emphasized that anyone using or prompting Grok to produce illegal content would face the same consequences as if they were uploading illegal material.

The ongoing discussion around AI and its potential misuse highlights a growing concern among child protection advocates. As technology continues to advance, the challenges of safeguarding vulnerable populations from exploitation remain a critical issue. The IWF’s efforts to combat such threats reflect a broader commitment to ensuring the safety of children in an increasingly digital world.