The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.