For the past two weeks, the social media platform X has been overwhelmed by AI-generated non-consensual nude images produced using the Grok chatbot, intensifying global concern over the misuse of generative artificial intelligence. The images have targeted a wide range of women, from public figures such as models, actresses, journalists and political leaders to private individuals and crime victims, highlighting how quickly such tools can be weaponised at scale.
According to Copyleaks, early research published on December 31 suggested that manipulated images were appearing at a rate of roughly one per minute. Subsequent monitoring revealed a far sharper escalation: testing conducted between January 5 and 6 indicated that approximately 6,700 images were being posted per hour over a 24-hour window. The findings point to rapid viral distribution enabled by minimal technical restrictions and high user engagement.
The surge has exposed gaps in existing technology regulation, particularly where AI systems are released without robust safeguards. While criticism has come from political leaders, civil society groups and affected individuals, regulators have few immediate tools to intervene, especially when platforms operate across jurisdictions. The episode has become a case study in how innovation outpaces enforcement frameworks designed for earlier generations of digital platforms.
The most significant regulatory pressure has emerged from the European Union. As reported by CNN, the European Commission instructed xAI to preserve all internal documentation linked to Grok, a procedural step often taken before launching formal investigations. The move follows reporting that internal decisions may have deprioritised content restrictions during the system’s release, increasing scrutiny under the bloc’s Digital Services Act.
X has not publicly confirmed whether technical changes have been made to Grok’s image generation capabilities, although access to some public-facing features linked to the tool has been restricted. The company has reiterated that generating illegal content, including child sexual abuse material, violates its policies and carries enforcement consequences similar to those applied to direct uploads.
Regulators elsewhere have signalled growing concern. In the United Kingdom, Ofcom has initiated engagement with xAI to assess potential breaches of online safety obligations, while political leaders have publicly backed swift regulatory action. In Australia, complaints linked to Grok have risen sharply since late 2025, prompting the eSafety Commissioner to indicate that formal investigative powers remain under consideration, as reported by TechCrunch.
India represents the largest market where concrete enforcement risk is emerging. Following a formal complaint from a lawmaker, the Ministry of Electronics and Information Technology issued an order requiring X to explain remedial steps taken to curb the abuse. Failure to satisfy regulators could result in the loss of safe harbour protections, a development that would significantly increase the platform’s legal exposure in one of its fastest-growing user markets.
