The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • Spuddlesv2@lemmy.ca · 23 hours ago

    They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?

    • Scipitie@lemmy.dbzer0.com · 19 hours ago

      In the very first post on this thread I pointed out that I’m not talking about this specific case at all.

      • Spuddlesv2@lemmy.ca · edited · 14 hours ago

        Fair enough, but in the post I replied to, you did say you won’t blame the parents “here” in the slightest, which to me means “here, in this specific case”.