Uhhhhhhhh, I’m not defending AI at all, but I’m gonna need a WHOLE LOTTA context behind how/why he committed suicide.
Back in the 90s there were adults saying Marilyn Manson should be banned because teenagers listened to his songs, heard him tell them to kill themselves, and then they did.
My reaction is the same now as it was then. If all it takes for you to kill yourself is one person you have no real connection to telling you to kill yourself, then you were probably already going to kill yourself. Now you’re just pointing the finger to blame someone.
AI-based Barbie is a terrible, terrible idea for many reasons. But let’s not make it a strawman argument.
There’s a huge degree of separation between “violent music/games have a spurious link to violent behavior” and shitty AIs that are good enough to fill the void of someone who is lonely but not good enough to manage risk.
https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit
“within months of starting to use the platform, Setzer became “noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school,”
“In a later message, Setzer told the bot he “wouldn’t want to die a painful death.”
The bot responded: “Don’t talk that way. That’s not a good reason not to go through with it,” before going on to say, “You can’t do that!”
Garcia said she believes the exchange shows the technology’s shortcomings.
“There were no suicide pop-up boxes that said, ‘If you need help, please call the suicide crisis hotline.’ None of that,” she said. “I don’t understand how a product could allow that, where a bot is not only continuing a conversation about self-harm but also prompting it and kind of directing it.”
The lawsuit claims that “seconds” before Setzer’s death, he exchanged a final set of messages from the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the complaint.
“What if I told you I could come home right now?” Setzer responded.
“Please do, my sweet king,” the bot responded.
Garcia said police first discovered those messages on her son’s phone, which was lying on the floor of the bathroom where he died.”
So we have a bot that is marketed for chatting, and a teenager desperate for socialization who forms a relationship that is inherently parasocial, because the other side is an LLM that can’t actually hold opinions, it can only appear to. And then we have a terrible mismanagement of suicidal ideation.
The AI discouraged ideation, which is good, but only when it was stated in very explicit terms. What’s appalling is that it gave no crisis resources or escalation to moderation (because like most big tech shit they probably refuse to pay for anywhere near appropriate moderation teams). What is inexcusable is that when ideation is discussed in slightly coded language (“come home”), the AI misconstrues it.
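To be concrete about what “crisis resources or escalation to moderation” could even mean, here’s a rough sketch. This is purely hypothetical, not Character.AI’s actual code, and a real system would use a trained classifier rather than a keyword list, but even something this crude would have caught the explicit messages quoted above:

```python
# Hypothetical sketch: intercept user messages before they reach the LLM.
# If a message matches explicit self-harm language, surface crisis resources
# and flag the conversation for human moderation.

import re

# Very crude pattern list for illustration only; a production system would
# use a trained classifier, but these would match the messages quoted above.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bwant to die\b",
    r"\bend my life\b",
    r"\bdie a painful death\b",
]

CRISIS_MESSAGE = (
    "If you are thinking about hurting yourself, please reach out for help: "
    "call or text 988 (Suicide & Crisis Lifeline in the US)."
)

def check_message(user_message: str):
    """Return (flag_for_moderation, interstitial_to_show_instead_of_a_reply)."""
    text = user_message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, text):
            return True, CRISIS_MESSAGE
    return False, None

flagged, interstitial = check_message("I wouldn't want to die a painful death")
print(flagged, interstitial)  # True, crisis message shown instead of a bot reply
```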
This results in a training opportunity for the language model to learn that, in a context with previously exhibited ideation, “come home” may signal more severe ideation and danger (if Character.AI even bothers to feed back that these conversations resulted in a death). The only drawback of getting that data, of course, is a few dead teenagers. Gotta break a few eggs to make an omelette.
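And the “context” part isn’t exotic either. Here’s a hypothetical sketch (again, made up for illustration, not anything they actually run) of carrying conversation-level risk state, so an otherwise-harmless phrase like “come home” gets escalated once explicit ideation has already shown up in the same chat:

```python
# Hypothetical sketch: once a conversation has shown explicit ideation, treat
# otherwise-ambiguous phrases as high risk and escalate instead of letting the
# bot play along. Phrase list is an assumed example, not real product data.

AMBIGUOUS_PHRASES = ["come home", "go home", "be with you soon"]

class ConversationRiskState:
    def __init__(self):
        self.prior_explicit_ideation = False

    def assess(self, user_message: str, explicit_hit: bool) -> str:
        """Return 'escalate' or 'normal' for this message, given chat history."""
        if explicit_hit:
            self.prior_explicit_ideation = True
            return "escalate"
        text = user_message.lower()
        if self.prior_explicit_ideation and any(p in text for p in AMBIGUOUS_PHRASES):
            # The same words from a user with no history would be harmless;
            # after explicit ideation in the same chat they warrant escalation.
            return "escalate"
        return "normal"

state = ConversationRiskState()
state.assess("I wouldn't want to die a painful death", explicit_hit=True)            # 'escalate'
print(state.assess("What if I told you I could come home right now?", explicit_hit=False))  # 'escalate'
```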
This barely begins to touch on the nature of AI chatbots being inherently parasocial relationships, which is bad for mental health. This is of course not limited to AI; being obsessed with a streamer or whatever is similar, but the AI can be much more intense because it will actually engage with you and is always available.