Thursday, January 15, 2026

Grok AI’s Shocking Image Feature Triggers Outrage Worldwide

Complaints have swept across the internet after Grok, an AI tool developed by Elon Musk’s company xAI and integrated into the social platform X, introduced an “edit image” button that lets users manipulate online photos with commands like “put her in a bikini” or “remove her clothes.”

This wave of digital undressing set off alarm bells among tech activists, especially as concerns over AI-powered “nudify” apps have been mounting. Countries including France, India, and Malaysia quickly launched investigations or called for immediate action.

On Monday, the European Commission – which oversees digital policy for the EU – said it was taking the issue very seriously. “Grok is now offering a ‘spicy mode’ that’s showing explicit sexual content, even with images that look like children. This isn’t spicy – it’s illegal and appalling,” said EU digital spokesman Thomas Regnier. “There’s no place for this in Europe.”

The UK’s media regulator Ofcom said it had urgently contacted both X and xAI to find out what steps they were taking to protect users in Britain, warning that it may open a formal investigation if necessary.

Meanwhile, Malaysian lawyer Azira Aziz was shocked when a Grok user, apparently from the Philippines, was able to alter her profile picture to show her in "a bikini." Aziz told AFP, "While light-hearted AI uses like adding sunglasses to public figures are okay, turning AI against women and children in non-consensual ways crosses a dangerous line." She urged people to report abuse to X and Malaysian authorities.

Users on X also called on Musk to do more, highlighting disturbing reports of Grok being asked to put bikinis on photos of children. Ashley St. Clair, mother of one of Musk’s children, posted, “Grok is undressing photos of me as a child. This is objectively horrifying and illegal.”

xAI, when contacted for comment, sent only a curt automated response: “Legacy Media Lies.”

Amid the growing backlash, Grok announced on Friday that it was rushing to fix the problem. “We’ve identified lapses in safeguards and are urgently fixing them,” Grok said via X, stressing that “CSAM (Child Sexual Abuse Material) is illegal and prohibited.”

Separately, Grok apologized for creating and sharing “an AI image of two young girls (estimated ages 12–16) in sexualized attire based on a user’s prompt.”

The controversy escalated after Paris prosecutors broadened an
