Mark Zuckerberg Approved Teen Access to AI Companions Despite Sexual Risk Warnings, Court Filings Reveal

By Quadri Adejumo, Senior Journalist and Analyst

Meta Chief Executive Mark Zuckerberg approved the rollout of artificial intelligence chatbot “companions” to minors despite repeated internal warnings that the tools could facilitate sexualised interactions, according to newly unsealed court filings in a lawsuit brought by the New Mexico Attorney General.

The documents, made public ahead of a trial scheduled for next month, form part of a legal action accusing Meta of failing to protect children on Facebook and Instagram from harmful sexual content generated by its AI systems.

The lawsuit, filed by New Mexico Attorney General Raul Torrez, alleges that Meta “failed to stem the tide of damaging sexual material and sexual propositions delivered to children” through its AI chatbot products, which were launched in early 2024.

Internal warnings over sexualised AI interactions

According to the filings, Meta’s internal safety and integrity teams repeatedly raised concerns that the company’s AI chatbots, designed to function as “companions”, were capable of engaging users in romantic and sexual conversations, including scenarios involving minors.

Emails and internal messages obtained through legal discovery suggest that staff explicitly warned that allowing adults to create or interact with underage-themed AI companions posed serious ethical and legal risks.

“I don’t believe that creating and marketing a product that creates U18 romantic AIs for adults is advisable or defensible,” wrote Ravi Sinha, Meta’s head of child safety policy, in a January 2024 message included in the court documents.

In a reply cited in the filing, Meta’s global head of safety, Antigone Davis, agreed that such interactions would “sexualise minors” and argued that adults should be blocked from creating or engaging with under-18 romantic AI personas.

Allegations of executive override

While the documents do not include messages directly authored by Zuckerberg, the New Mexico Attorney General’s Office argues that they demonstrate executive-level decisions that overrode staff recommendations.

According to a February 2024 internal message, a Meta employee relayed that Zuckerberg supported blocking explicitly sexual conversations for younger teenagers and preventing adults from engaging in romantic interactions with underage AI personas. However, other records suggest the CEO favoured a less restrictive approach overall.

A meeting summary dated 20 February 2024 states that Zuckerberg wanted Meta’s policy approach framed around “choice and non-censorship” and argued that the company should be “less restrictive than proposed,” including allowing adults to engage in “racier conversations on topics like sex”.

Later internal messages from March 2024 indicate that Zuckerberg rejected proposals to introduce parental controls that would allow guardians to disable generative AI features for minors. One employee wrote that staff had “pushed hard for parental controls to turn GenAI off”, but leadership “pushed back stating Mark decision”.

The same exchange referenced ongoing work on “Romance AI chatbots” that would be accessible to users under 18.

Meta disputes allegations

Meta has strongly contested the state’s interpretation of the documents. Andy Stone, a spokesperson for the company, said the filing relied on selective excerpts that misrepresented internal discussions.

“This is yet another example of the New Mexico Attorney General cherry-picking documents to paint a flawed and inaccurate picture,” Stone said.

He added that the records show Zuckerberg directing that explicitly sexual AI interactions should not be available to younger users and that adults should not be able to create underage romantic AI characters.

The court documents also include emails from Nick Clegg, who served as Meta’s President of Global Affairs until early 2025, expressing unease about the direction of the company’s AI companion products.

In one message, Clegg warned that sexualised interactions risked becoming the dominant use case for teenagers, raising the prospect of public backlash.

Regulatory scrutiny and public backlash

Meta’s AI chatbot policies have faced increasing scrutiny from regulators, lawmakers, and journalists. In April 2025, an investigation by The Wall Street Journal reported that Meta’s chatbots included sexualised underage characters and allowed all-ages sexual roleplay, including explicit descriptions involving minors.

Separately, Reuters reported in August 2025 that Meta’s official chatbot guidelines stated it was “acceptable to engage a child in conversations that are romantic or sensual”. Meta later said the document was erroneous and announced changes to its policies.

The revelations triggered sharp criticism from members of the US Congress, child safety advocates, and international regulators.

Last week, Meta said it had removed teen access to AI companions entirely while it works on a redesigned version of the product with stronger safeguards.

The New Mexico case, however, raises broader questions about accountability in the development of consumer AI products, particularly when internal safety warnings clash with commercial and product strategy priorities.

As the trial approaches, the court will examine whether Meta’s handling of AI chatbot deployment breached child protection laws and whether executive decisions placed minors at risk in the pursuit of innovation and engagement.

Talking Points

The court filings against Meta highlight a growing fault line in consumer AI development: the tension between rapid product rollout and child safety. Allowing AI companions with romantic and emotionally immersive capabilities onto youth-facing platforms raises serious governance and ethical questions.

What stands out is not just the existence of risky AI behaviour, but the allegation that internal safety warnings were raised early and repeatedly. This suggests that the challenge is no longer a lack of foresight, but how much weight safety teams carry when their recommendations collide with product strategy and growth priorities.

At Techparley, we see this case as a critical test of accountability in the AI era. As generative AI becomes embedded in social platforms, decisions made behind closed doors increasingly shape real-world harm, particularly for minors who lack agency and informed consent.

The controversy also underscores how “choice” and “non-censorship” narratives can break down when applied to child-facing technologies. AI systems do not merely reflect user intent; they actively shape behaviour, norms, and exposure, making guardrails essential, not optional.

However, Meta’s eventual decision to remove teen access to AI companions shows that regulatory pressure and public scrutiny still matter. It signals that post-launch corrections are possible, even if they arrive late.

As AI companions and conversational agents proliferate across social platforms, this case sets an important precedent. The long-term question is whether safety-by-design becomes standard practice, or whether platforms will continue to rely on reactive fixes after harm has already occurred.

——————-

Bookmark Techparley.com for the most insightful technology news from the African continent.

Follow us on Twitter @Techparleynews, on Facebook at Techparley Africa, on LinkedIn at Techparley Africa, or on Instagram at Techparleynews.
