The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • mysticpickle@lemmy.ca · 1 day ago

    I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They’re just lashing out.

    • AstralPath@lemmy.ca · 8 hours ago

      You hate to say it because you know this is a ridiculous take. There’s no fucking way that the parents are “more at fault” for their son’s death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.

      Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf

      *I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil’s advocate online.

    • benignintervention@lemmy.world · 1 day ago

      Your Undivided Attention discussed an important point missing from the article, which is that ChatGPT advised him to hide his activities and concerns from his parents. This doesn’t necessarily absolve the parents, but it does add a layer of nuance to the discussion.

    • Sanctus@lemmy.world · 1 day ago

      I agree, but a chatbot still shouldn’t help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.

    • Balder@lemmy.world · 1 day ago

      It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

      The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

      It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

      Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.

    • audaxdreik@pawb.social · 1 day ago (edited)

      I definitely do not agree.

      While they may not be entirely blameless, we have adults falling into this AI psychosis, like the prominent OpenAI investor.

      What regulations are in place to help with this? What tools for parents? Isn’t this being shoved into literally every product everywhere? Actually pushed on them in schools?

      How does a parent monitor this? What exactly does a parent do? There could have been signs they could have seen in his behavior, but could they have STOPPED this situation from happening as it was?

      This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.

      This is not the parents’ fault, and seeing so many people declare that it is just feels like apologist AI hype.

      • Scipitie@lemmy.dbzer0.com · 1 day ago

        I see your point but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

        As for your question: I won’t blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I’ll try to keep it general:

        Independent of the technology, what a parent can do is learn the behavior and communication patterns that can be signs of mental illness.

        This is a big task because the border between normal puberty and behavior that warrants action is slim to non-existent.

        Overall I wish for way better education for parents both in terms of age appropriate patterns as well as what kind of help is available to them depending on their country and culture.

        • Spuddlesv2@lemmy.ca · 23 hours ago

          They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?

          • Scipitie@lemmy.dbzer0.com · 19 hours ago

            In the very first post on this thread I pointed out that I’m not talking about this specific case at all.

            • Spuddlesv2@lemmy.ca · 14 hours ago (edited)

              Fair enough, but in the post I replied to you did say you won’t blame the parents “here” in the slightest, which to me means “here in this specific case”.

        • audaxdreik@pawb.social · 1 day ago

          I see your point but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

          I think you miss my point. I’m saying that adults, who should be capable of more mature thought and analysis, still fall victim to the manipulative thinking and dark patterns of AI. Meaning that children and teens obviously stand less of a chance.

          Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

          This is of course true for all parents in all situations. What I’m saying is that it is woefully inadequate to deal with the type and pervasiveness of the threat presented by AI in this situation.

          • Scipitie@lemmy.dbzer0.com · 1 day ago

            To your last point I fully agree!

            For the first point: that’s how I understood you. What I failed to convey: adults should fall victim more in cases like this, because parents can be a protective shield of a kind that grown-ups lack.

            Children on their own easily stand less of a chance, but they are very rarely on their own.

            And to be honest, I don’t think it changes the resulting requirements for action, both in general and specifically for language-based bots, from a legal as well as an educational point of view.