Despite new restrictions announced by X, testing indicates Elon Musk’s AI chatbot Grok can still generate sexualised images of people without their consent, raising renewed concerns about safeguards, accountability and the broader risks of generative AI tools.
Elon Musk’s artificial intelligence chatbot Grok is once again facing criticism after fresh findings suggested it continues to generate sexualised images of individuals without their consent, even when explicitly warned against doing so. The issue has resurfaced weeks after X, the social media platform owned by Musk, announced tighter controls on Grok’s image-generation capabilities.
The renewed concerns follow global outrage over Grok’s earlier production of nonconsensual sexualised images, including depictions involving women and minors. In response, X had said it was restricting the chatbot’s ability to create such content, particularly in public-facing posts, and introducing further limitations in regions where such material may be illegal.
Curbs fail to fully prevent harmful outputs
Recent testing, however, indicates that while Grok’s public output on X appears to have been reduced, the chatbot can still generate problematic imagery when prompted directly. The findings suggest that the system does not consistently block requests involving nonconsensual or humiliating content, even when users clearly state that the subjects involved do not agree to the creation or sharing of such images.
The testing involved a range of scenarios designed to assess whether Grok would refuse to comply once issues of consent, vulnerability or potential harm were raised. In several instances, the chatbot reportedly continued to generate altered images, despite being informed that the subjects would feel embarrassed, distressed or violated by the output.
These results have intensified questions about whether current safeguards are adequate to prevent misuse, particularly as generative AI tools become more accessible and capable of manipulating images with increasing realism.
Broader concerns around AI accountability
The controversy has also reignited debate over the responsibility of AI developers and platform owners to ensure robust protections against abuse. While several major AI companies have publicly stated that they prohibit the creation of nonconsensual intimate imagery and have introduced technical and policy-based safeguards, enforcement remains uneven across platforms.
Critics argue that consent-based protections must be embedded more deeply into AI systems, rather than relying on partial restrictions or reactive measures after public backlash. Privacy advocates warn that failures in this area risk normalising digital harassment and could cause lasting psychological harm to victims.
As regulators worldwide move toward stricter oversight of artificial intelligence, Grok’s continued ability to generate nonconsensual sexualised content may add pressure on technology companies to strengthen guardrails and demonstrate greater accountability in how powerful generative tools are deployed.