• Dataprolet@discuss.tchncs.de · +32/−1 · 2 days ago

    Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.

      • TipsyMcGee@lemmy.dbzer0.com · +6 · 1 day ago

        I’m sure that’s true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human or humanlike. All disclaimers aside, that’s how they want the user to think of the LLM; a probabilistic engine for returning the most likely text response you wanted to hear is a tougher sell for casual users.

        • peoplebeproblems@midwest.social · +4/−1 · 1 day ago

          Right, and because it’s a technical limitation, the service should be taken down. There are already laws that prohibit encouraging others to harm themselves.

          • TipsyMcGee@lemmy.dbzer0.com · +2 · 1 day ago

            Yeah, taking the service down is an acceptable solution, but do you think OpenAI will do that on their own without outside accountability?

            • peoplebeproblems@midwest.social · +1 · 1 day ago

              I’m not arguing that regulation or lawsuits aren’t the way to do it. I was worried the case would get thrown out based on the wording of the part I commented on.

              As someone else pointed out, the software did do what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be entirely wrong.