Show me the chat logs. Once you “trick” it, it’s your own fault.
That said, there should be more warning messages within the chat window. Even if it doesn’t stop answering, a ⚠️ should be pinned to the screen with a "get human help" button.
However, with Trump killing suicide hotlines, I don’t know who will help. It is not OpenAI’s responsibility.
Idk I think if you make something that destroys mental health it becomes your responsibility to fix it.
So, movies, music, books? All of those have the potential to destroy mental health. You just don’t like AI. If you don’t want it to destroy your mental health, don’t use it for your mental health. It is a calculator, and nothing more.
It’s not even a calculator, it’s a “what word is the most likely to come next” machine
Pedantically, it calculates that word 😇
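To make the "calculates the next word" point concrete, here is a toy sketch (my own illustration, not how any real model is implemented): count which word follows which in a tiny corpus, then always emit the most frequent follower. Real LLMs compute the same kind of next-token probability with a neural network over a huge corpus, but the output step is the same idea.

```python
from collections import Counter, defaultdict

# Toy "what word is most likely to come next" machine:
# count word pairs in a tiny corpus, then pick the most
# frequent follower of a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def most_likely_next(word):
    # Return the highest-count follower of `word`.
    return followers[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

A real model replaces the pair counts with learned probabilities over tokens, but it is still, pedantically, calculating that word.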
I don’t care if OpenAI loses all their money, but this ruling would also affect open source AI.
If somebody releases an AI, why would they be liable for how people decide to use it? It’s software, and like any other program it’s the user’s choice how to use it.
If I decide to run rm -rf / --no-preserve-root, is GNU then responsible to fix it?
AI is already very censored, and if makers become liable for what people do with their AI, they will become hyper-censored and performance will go down the drain.
Don’t know, don’t really care. A child is dead; this isn’t a “won’t somebody please think of the children” thing. Multiple people have had complete mental breakdowns and will never be the same.
How is this not a “won’t somebody please think of the children” thing?
Yes, it is terrible this has happened, but there is a way to prevent children from accessing AI, and it’s called parenting.
Kids shouldn’t be using AI if it harms them; kids can’t make that choice, so it should be made for them. Same with alcohol, same with porn, same with the other things restricted to children.
That doesn’t mean responsible adults shouldn’t be able to use it, but “won’t somebody please think of the children” litigation will make that impossible.