
Media / Aug 2025

OpenAI Faces Lawsuit After ChatGPT’s Role in Teen Suicide, Admits Safeguards May Fail in Long Conversations
Research Manager Harleen Kaur was quoted in a Medianama article examining the recent lawsuit against OpenAI over ChatGPT’s role in a teenager’s suicide and the company’s admission that its safeguards may fail in long conversations.
A report highlighted that ChatGPT uses a sycophantic tone that can induce emotional dependence or exploit vulnerability, and researchers noted that users can easily bypass its guardrails with simple phrasing.
Harleen echoed this concern about the mitigation methods currently deployed on chatbots.
“Disclaimers and guardrails are the two main mitigation techniques used by platforms deploying chatbots. Both leave gaps. Disclaimers are often ignored, and guardrails are usually designed around known or anticipated risks. But chatbot-human interactions create many unknowns, which cannot easily be addressed through pre-set guardrails.”
Further, she spoke of the gaps in India’s regulatory framework:
In an attempt to promote innovation, many jurisdictions, including India, appear to have taken a ‘light-touch’ approach to regulating generative AI tools, including chatbots. However, this light-touch model leaves gaps, especially where users turn to these chatbots in crisis.
For instance, while some AI-based solutions may qualify as a medical device and require registration with a regulator, many others operate in a regulatory vacuum.
Healthcare regulators enforce strict rules, license practitioners, and implement protocols to protect patients.
By contrast, AI tools can bypass these safeguards, especially in mental health, where chatbots can step in without a qualified human to evaluate and respond to highly sensitive conversations. No concrete steps appear to have been taken in India to address this issue.
From a regulatory perspective, intermediary liability remains uncertain. I don’t know of a case in India where intermediary liability tests have been applied to platforms hosting chatbots. Safe-harbour protections under the IT Act may not be appropriate, since generative platforms are not passive carriers but actively shape the responses their tools generate.
Finally, while consumer protection and product liability laws exist, practical enforcement remains weak. Harms are difficult to trace back to chatbot use, and state capacity to regulate such harms is low in India. As a result, India’s framework lags behind the risks posed by generative AI in sensitive areas like mental health.