The European Commission has opened a new formal investigation into social media platform X under the Digital Services Act (DSA), intensifying regulatory scrutiny over the spread of sexual deepfakes and other harmful content within the European Union.
In a statement published on its website, the Commission said the probe will examine whether X has adequately assessed and mitigated risks linked to the dissemination of illegal content, particularly manipulated and non-consensual sexually explicit images. The Commission also raised concerns that some of this content could amount to child sexual abuse material.
The move follows growing international alarm over the misuse of generative artificial intelligence tools, including X’s AI assistant, Grok. Authorities in countries such as the United Kingdom, Malaysia, and Indonesia have already launched separate investigations or regulatory actions related to the creation and spread of non-consensual sexual imagery.
What the Commission Is Investigating
According to the Commission, there are indications that systemic risks associated with sexual deepfakes on X have already materialised, exposing EU citizens to serious harm.
“In light of this, the Commission will further investigate whether X complies with its DSA obligations,” the statement said.
Specifically, regulators will assess whether X has:
- Diligently identified and mitigated systemic risks, including the dissemination of illegal content and harmful effects related to gender-based violence.
- Properly addressed serious negative impacts on users’ physical and mental wellbeing arising from the deployment of Grok’s functionalities on the platform.
- Conducted and submitted an ad hoc risk assessment report on Grok prior to its rollout, particularly where its deployment significantly altered X’s overall risk profile.
The Commission emphasised that such risk assessments are a core requirement under the DSA, especially for platforms classified as Very Large Online Platforms (VLOPs).
What You Should Know
Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, described sexual deepfakes as a grave violation of fundamental rights.
“Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens – including those of women and children – as collateral damage of its service,” Virkkunen said.
Her remarks underline the EU’s increasingly firm stance on platform accountability, particularly where emerging technologies amplify harm at scale.
The Commission warned that X's failure to meet these obligations could constitute breaches of Articles 34(1) and (2), 35(1), and 42(2) of the DSA.
These provisions require large platforms to identify systemic risks, implement mitigation measures, and ensure transparency around high-impact technological changes.
In parallel, the Commission has expanded a separate investigation launched in December 2023. That earlier probe is now examining whether X has adequately assessed and mitigated risks linked to its recommender systems, including the platform’s recent transition to a Grok-powered recommendation engine.
A History of Regulatory Pressure
The December 2023 proceedings marked the first formal enforcement action under the Digital Services Act. They examined X's compliance across several areas, including content moderation practices, risk management, deceptive design, advertising transparency, and data access for independent researchers.
That investigation drew on X’s own risk assessment and transparency reports, as well as responses to formal information requests. It also examined the platform’s handling of content related to Hamas’ attacks against Israel, a key test case for crisis-related disinformation and illegal content.
X was designated a Very Large Online Platform on 25 April 2023, based on its estimated 112 million monthly users in the EU, placing it under the strictest tier of DSA obligations.
Previous Fine and Ongoing Concerns
Regulatory pressure intensified in December 2025, when the Commission fined X €120 million for multiple DSA violations. The penalties stemmed from deceptive design practices, weak advertising transparency, and restrictions on data access for independent researchers.
Regulators highlighted three principal failures:
- X’s paid blue checkmark system, which allowed users to purchase verification without robust identity checks, misleading the public and increasing exposure to scams.
- Persistent shortcomings in advertising transparency, including missing information on sponsors and targeting, as well as design barriers and delays that obstructed scrutiny.
- Limited access to public platform data for independent researchers, undermining efforts to study systemic risks such as misinformation and illegal content.
A Test Case for AI Governance in Europe
The latest investigation places X at the centre of a broader debate over AI governance, platform responsibility, and digital safety in Europe. As generative AI tools become more deeply embedded in social platforms, regulators are signalling that innovation must not come at the expense of fundamental rights.
For X, the outcome of the probe could carry significant legal and financial consequences and further define how far the EU is willing to go in enforcing the Digital Services Act against global technology companies.
As the Commission continues its investigation, the case is likely to serve as a bellwether for how Europe intends to regulate AI-powered platforms accused of amplifying harm at scale.
Talking Points
The European Commission's decision to open a fresh investigation into X under the Digital Services Act is significant, signalling a tougher regulatory stance on the misuse of generative AI tools to produce sexual deepfakes.
This probe reflects growing concern that AI-powered features like Grok are being deployed without sufficient safeguards, exposing users, particularly women and children, to serious harm through non-consensual and manipulated sexual content.
At Techparley, we see this investigation as a critical test of whether major platforms are taking their risk assessment and mitigation obligations seriously, especially as AI systems become deeply embedded in content creation and recommendation pipelines.
The Commission’s focus on systemic risks, including gender-based violence and mental wellbeing, suggests that regulators are no longer treating sexual deepfakes as isolated incidents, but as structural failures in platform governance.
However, the case also highlights persistent gaps between innovation and accountability. Rapid deployment of AI features without robust pre-deployment risk assessments raises questions about whether growth and engagement are being prioritised over user safety.
As enforcement under the DSA intensifies, we see an opportunity for clearer global standards around AI governance, platform transparency, and harm prevention. The outcome of this investigation could shape how AI-powered social platforms operate not just in Europe, but worldwide.
——————
Bookmark Techparley.com for the most insightful technology news from the African continent.
Follow us on Twitter @Techparleynews, on Facebook at Techparley Africa, on LinkedIn at Techparley Africa, or on Instagram at Techparleynews.

