The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • uss_entrepreneur@startrek.website · 22 hours ago

    OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    Hey ChatGPT, how about we make it so no one unalives themselves with your help even if they’re over 18.

    For fuck’s sake, it helped him write a suicide note.

    • Aneb@lemmy.world · 8 hours ago

      Yeah, my sister is 32 and needs the guardrails. She’s had two manic episodes in the past month, induced by a lot of external factors, but AI tied the bow on the breakdown: she kept asking it to think for her and do her critical thinking.

    • ronigami@lemmy.world · 16 hours ago

      Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).

      • jpeps@lemmy.world · 16 hours ago

        I think OP knows this. It’s an unsolvable problem. The conclusion from that might be that this tech shouldn’t be two clicks away from every teen’s, or even every person’s, hand.

      • BussyGyatt@feddit.org · 19 hours ago

        I know it’s offensive to see people censor themselves that way because of TikTok, but try to remember there’s a human being on the other side of your words.

      • yermaw@sh.itjust.works · 11 hours ago

        Must have been on Reddit a long time; I got banned for saying “kill” like three times, none of them in a mean-spirited or call-to-action context.

        Self-censoring is hard to deprogram yourself out of, and by the time they’re comfortable with freedom of language again, who’s to say it won’t be the same story here?