The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you steer it toward, so it carries your bias. Since biased output is not reliable, the data/information is useless.
The same jobs that get annoyed when they see AI-generated CVs.
Senior Boomer executives have no fucking clue what AI is, but need to implement it to seem relevant and save money on labor. Already they are spending more on errors, as they swallow all the hype from billionaire tech bros they worship.
Managers love yes-men, so the more biased the better.
Yeah, I have some background in history, and ChatGPT will be objectively wrong about some things. Then I will tell it that it is wrong because X, Y and Z, and the stupid thing will come back with, “Yes, you are right, X, Y and Z were a thing because…”.
If I didn’t know that it was wrong, or if, say, a student took what it said at face value, then they too would now be wrong. Literal misinformation.
Not to mention the other times it is wrong, and not just ChatGPT, because these tools will cite sources like Reddit. Recently Brave AI claimed that Ironfox, the Firefox fork, was based on FF ESR. That is impossible, since Ironfox is a fork for Android. So why was it wrong? It quoted some random guy who said that on Reddit.
I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design), know right from wrong. The only thing it knows is which word is statistically most likely to follow the ones before it.
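To make that concrete, here is a toy sketch in Python of purely statistical next-word prediction. The bigram table and its counts are invented for illustration; a real model conditions on the whole preceding context with a neural network, but the point stands: it picks the likeliest continuation, with no concept of truth anywhere in the process.

# Toy illustration (not a real model): greedy next-word prediction
# from a made-up bigram frequency table.
bigram_counts = {
    "the": {"cat": 4, "dog": 3, "moon": 1},
    "cat": {"sat": 5, "ran": 2},
    "sat": {"on": 6},
    "on": {"the": 7},
}

def next_word(word: str) -> str:
    """Return the most frequent follower of `word` in the toy table."""
    followers = bigram_counts.get(word)
    if not followers:
        return "<end>"
    # Pick the statistically most likely next word; no notion of correctness.
    return max(followers, key=followers.get)

def generate(start: str, max_words: int = 6) -> str:
    words = [start]
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat sat": fluent-looking, meaning-free

The output reads like language because the statistics of language produce language-shaped text, not because anything checked whether it was right.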
Yes, I know this. I assumed that was a given. The point is that it is marketed and sold to people as a one-stop shop of convenience for searching, and tons of people believe that. Which is very dangerous. You misunderstood.
My point was not about whether it knows it is right or wrong. Within that context it is just an extremely complex calculator. It does not know what it is saying.
My point was that, aside from the often baked-in bias, these tools are frequently wrong when used as a search engine, and many people do not know that.
I run my course exams in biochemistry through AI chat sites, and these sites are curiously doing worse than two years ago. I think there is an active campaign by activists to feed AI misinformation. But the biggest problem for STEM applications is that if there has been a new discovery that changes paradigms, AI still quotes the older, outdated paradigms because of the sheer mass of that text on the web.