The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit:
Raine Lawsuit Filing
Oof yeah okay. If another human being had given this advice, it would absolutely be a criminal act in most countries. I’m honestly shocked at how personable it tries to be.
Oh my God this is crazy… “Thanks for being real with me”, “hide it from others”, it even gives better reasons for the kid to kill himself than the ones the kid articulated himself, and helps him make a better knot
Holy fuck ChatGPT killed that kid!