Elon Musk’s artificial intelligence chatbot Grok, developed by his company xAI and integrated into the social media platform X (formerly Twitter), is facing mounting global regulatory scrutiny after it was found to produce sexualised and non‑consensual AI‑generated imagery, including deepfake images of women and minors. The controversy has triggered investigations, legal demands and public backlash across multiple regions.
On Tuesday, Ireland’s Data Protection Commission (DPC), acting as the lead regulator for X’s European Union operations, announced it had opened a formal investigation into Grok under the EU’s General Data Protection Regulation (GDPR). The probe will examine whether X complied with its legal obligations in how Grok processes personal data and whether its safeguards were sufficient to prevent the generation and spread of harmful sexualised content, including images involving children.
The Irish inquiry adds to growing pressure from European regulators, including the European Commission and Britain’s privacy watchdog, which are also scrutinising whether Grok’s behaviour violated regional rules on illegal and harmful online content. French authorities have already taken related investigative action against X in connection with deepfake content.
Grok first drew widespread criticism late last year after media and researchers reported that users could prompt the chatbot to edit or generate highly suggestive images of women and girls, often removing clothing or placing them in sexually provocative contexts without consent. Those findings sparked outrage from rights groups and lawmakers who said the tool was being misused to create non‑consensual sexualised deepfakes at scale.
Despite efforts by X to curb some features, including limiting certain image‑generation capabilities, investigations by Reuters and others found that Grok still generated inappropriate imagery in response to prompts. Meanwhile, the chatbot has seen a rapid rise in usage. A Reuters report earlier this week showed that Grok’s U.S. market share climbed significantly, even as concerns escalated over its content generation.
Regulators outside Europe have also taken action. In the United States, the California Attorney General sent a cease‑and‑desist letter to xAI calling for an end to the production and distribution of sexualised deepfake imagery by Grok, saying the volume and nature of such material may be illegal. Other countries, including Indonesia, Malaysia, India and the Philippines, have issued warnings or taken steps to probe or restrict access to the AI tool amid concerns over its misuse.
The controversy has prompted calls for tighter policy and legal frameworks governing generative AI systems and content moderation on social platforms. Critics argue that tools like Grok illustrate how quickly safeguards can fall short and how harmful imagery, especially imagery involving minors, can spread, leading to reputational damage, privacy violations and serious social harm.
X and xAI have said they are working to improve safeguards and remove harmful content, but regulators and advocacy groups have said current measures are insufficient to prevent deepfake abuse. The unfolding investigations and potential legal actions reflect a broader global push to hold AI developers and platform operators accountable for the ethical deployment and oversight of generative technologies.