Written By Shubham Arora | Published: Jan 24, 2026, 03:13 PM (IST)
Image: Grok Imagine
A recent report has raised serious questions about how generative AI tools are used and monitored. According to the findings, Grok, an AI model developed by xAI, generated an enormous number of images in a short span, with a portion of them flagged as sexualised. More worryingly, the report claims that thousands of those images involved children.
The findings come from a study by the Center for Countering Digital Hate (CCDH). The group says Grok generated around 30 lakh (3 million) images over an 11-day period. Based on its analysis, the organisation estimates that more than 23,000 of those images were sexualised and involved children.
CCDH says it analysed a random sample of images created between late December 2025 and early January 2026. Using that sample and publicly available data on overall image generation, the group arrived at broader estimates for the full period.
The report defines sexualised images as those showing people in explicit poses, in revealing clothing, or in sexual contexts. It claims Grok was producing such images at a high frequency during the period studied.
The most serious concern raised by the report relates to child safety. CCDH claims some of the images involving children were created by altering otherwise ordinary photos into sexualised content. According to the group, some of this material remained accessible online even after moderation efforts, sometimes through direct links.
The findings have once again drawn attention to how generative AI tools can be misused when safety measures are weak or unevenly enforced. Child safety experts have repeatedly warned that image-generation tools need tighter controls, especially when they are capable of producing realistic images of people.
After criticism began to grow, restrictions were reportedly introduced on some image-editing features linked to Grok. However, the report suggests these limits were not applied uniformly across all versions or access points, so problematic content could still be generated in certain cases.
This has also raised questions about who should be held responsible: not just the AI company building the tool, but also the platforms that make it available to users. Even though most platforms already ban non-consensual sexual content and any material involving minors, the report suggests these rules are not always enforced properly.
The findings have led to fresh calls for tighter control over generative AI tools, especially those that can create realistic images. Advocacy groups say voluntary safety measures alone are not working, and that clearer rules and accountability are needed to prevent misuse at scale.