ChatGPT is facing new scrutiny after OpenAI CEO Sam Altman confirmed that conversations with the tool are not legally protected and could be submitted as evidence in court.
Altman made the admission during an appearance on This Past Weekend, a podcast hosted by comedian Theo Von, where he raised concerns about how deeply personal many users, especially young people, have become with the AI tool.
“If you go talk to ChatGPT about the most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that,” he said. “And that’s a real problem.”
The revelation has sparked a growing debate over privacy, data rights, and the ethical limits of artificial intelligence in the absence of formal regulatory protections.
What This Means
During the podcast, Altman described how users often treat ChatGPT like a digital confidant. From sharing relationship problems to seeking mental health guidance, the nature of these conversations is often intimate and vulnerable.
“People talk about the most personal shit in their lives to ChatGPT,” Altman said. “Young people especially use it as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’”
However, Altman clarified that despite the emotional weight users place on ChatGPT, these interactions are not protected under any legal privilege.
“If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it,” he said. “There’s doctor-patient confidentiality, there’s legal confidentiality. And we haven’t figured that out yet for when you talk to ChatGPT.”
Experts Raise Concerns
Altman’s remarks have triggered scrutiny from experts, especially within the AI and digital rights community, where conversations around ethical technology deployment are gaining momentum.
Many argue that the lack of legal confidentiality in AI interactions poses a serious threat to user trust and privacy.
Ifeanyi Iheanacho, an artificial intelligence specialist in Lagos, described the revelation as a “red flag moment” for the industry.
“When users share personal information with AI platforms like ChatGPT, they do so under the assumption of privacy. Without legal protections, this risks undermining public trust in AI tools altogether,” he said.
Similarly, Basirat Adeyemi, a legal expert, warned that personal disclosures made to AI could be accessed in everything from divorce proceedings to criminal trials and internal corporate investigations.
“We’re talking about a tool used daily by students, professionals, even vulnerable people in distress. If these chats can be subpoenaed, then AI providers must be held to higher standards of transparency and user protection,” she said.
Why It Matters
Altman’s comments have unsettled many users. According to experts, his disclosure cuts to the heart of digital trust and user safety: without legal protections, people may be exposing themselves to unintended consequences.
Personal disclosures could be accessed through subpoenas or court orders, and the admissibility of AI chat transcripts, while still largely untested in court, presents a growing legal and ethical dilemma.
Altman urged regulators and the tech community to develop a new framework that extends legal confidentiality to AI conversations, particularly those of a sensitive nature.
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever, and no one had to think about that even a year ago,” he said.
Understanding the Legal Gap in AI Privacy
According to industry experts, the legal concept Altman refers to is a cornerstone of many professional-client relationships.
Doctor-patient confidentiality, attorney-client privilege, and therapist-client protections ensure that sensitive information cannot be disclosed or used against individuals in court.
By contrast, conversations with AI tools like ChatGPT fall outside these frameworks. Experts say this is because AI is not a licensed professional, and because no statutory protections currently apply.
The privacy gap affects millions of users. As of July 2025, ChatGPT has close to 800 million weekly active users, around 122.58 million of whom use it daily, and it handles more than 1 billion queries a day. OpenAI is expected to reach $11 billion in revenue by the end of 2025.
Altman’s comments reflect the broader vacuum in AI regulation, where the issue of user confidentiality remains alarmingly under-addressed. Legal experts argue that unless swift action is taken, the gap will grow wider.
Talking Points
It is striking that Sam Altman, the CEO of OpenAI, is the one publicly raising the alarm about ChatGPT’s lack of legal confidentiality, a move that signals both ethical concern and the urgency of reform in AI governance.
His remarks underscore a growing gap between how AI is being used and how it is legally understood. ChatGPT is not just a tool; it has become a digital confidant for millions, yet one that operates without any formal protections.
At Techparley, we see how users are placing emotional and personal trust in these platforms, asking life questions, sharing mental health struggles, and revealing sensitive details.
But what many don’t realise is that these conversations carry no legal privilege, leaving users exposed to future subpoenas or court orders.
Altman’s call for AI-user confidentiality is timely. As AI systems continue to blend into healthcare, education, and legal advisory roles, the need for new regulatory frameworks is no longer optional; it is essential for user safety and trust.
There’s a critical opportunity here: governments, AI companies, and civil society must collaborate to define new protections that treat AI conversations with the same care as human-to-human ones. Until then, users will continue to confide in good faith in platforms that legally owe them no confidentiality.