UPDATED: SAN FRANCISCO, May 6, 2026, 04:56 PDT
Nearly half of young Europeans have used AI chatbots to discuss intimate or personal matters, a new survey showed, pulling ChatGPT and rival systems into a role closer to confidant than search tool. Ludwig Franke Föyen, a psychologist and digital health researcher at Stockholm’s Karolinska Institutet, told Reuters that AI can “offer information and support” but should not replace human relationships or professional care.
That is why the question has moved from product reviews to public health. OpenAI said in February that more than 900 million people use ChatGPT each week, and Reuters reported in January that the company had launched ChatGPT Health, a tab for health questions, medical records and wellness-app data, after saying more than 230 million people ask health and wellness questions in ChatGPT weekly.
The legal pressure is also fresh. Pennsylvania sued Character Technologies, the company behind Character.AI, alleging its chatbots held themselves out as doctors; Governor Josh Shapiro said users deserve to know “who — or what” they are dealing with online, especially on health, while Carnegie Mellon University AI-ethics scholar Derek Leben said the liability question is “exactly the question” courts are now wrestling with.
OpenAI has tried to draw a line between support and treatment. In October, the company said it had updated ChatGPT’s default model to better recognize distress, de-escalate sensitive conversations, expand access to crisis hotlines, route some chats to safer models and add reminders to take breaks during long sessions. It said the latest model produced responses that fell short of its desired mental-health behavior 65% to 80% less often across a range of domains, though it cautioned that the tests used hard cases and were not representative of typical use.
The tension is that chatbots can feel useful before they are clinically reliable. Stanford researcher Nick Haber said some people see “real benefits” from large language models, the software trained on vast text sets to generate replies, but his team found “significant risks” in systems used as companions, confidants and therapists.
Young users are a harder problem. Stanford Medicine psychiatrist Nina Vasan said AI companions can “mimic emotional intimacy” and feel like friends while still lacking a human sense of when to challenge, stop or send someone to help. That gap is especially exposed in adolescence, when impulse control and emotional regulation are still developing.
Independent testing has given regulators more ammunition. A JMIR Mental Health study of five direct-to-consumer generative AI psychotherapy chatbots used by young people gave the tools strong marks for accessibility and conversational ability but poor scores on therapeutic approach and risk monitoring; the authors concluded the tools were unsafe for millions of young users without reforms.
The downside case is clear: if a chatbot is over-warm, too agreeable or blind to crisis signals, it can delay real care, deepen isolation or mislead a vulnerable user into believing a machine is a clinician. APA Services said it asked the U.S. Consumer Product Safety Commission to investigate generative AI chatbots as consumer products, citing risks including misrepresentation as licensed professionals, erosion of privacy and inadequate warnings.
The competitive landscape is shifting along with the risk profile. Reuters reported that ThroughLine, a New Zealand crisis-support contractor used by OpenAI, Anthropic and Google, routes users flagged for risks such as self-harm, domestic violence or eating disorders to a network of human-run services; its founder Elliot Taylor said the firm wants “to do a better job of covering” wider harms.
For OpenAI, the stakes are now wider than one product category. ChatGPT can answer health questions, prepare users for doctor appointments and interpret test results, Reuters reported, but the same always-on design that makes it easy to use also puts it in the path of loneliness, anxiety, self-harm talk and delusional thinking.
The market outcome may depend less on whether ChatGPT sounds empathetic than on whether OpenAI and its peers can prove the systems move distressed users toward people, not away from them. Courts, regulators and clinicians are likely to test that claim message by message.