• JohnEdwa@sopuli.xyz

    That is one of the fundamental flaws of machine learning like this: the way these models are trained means they end up always trying to agree with the user, because disagreeing gets scored as a “wrong” answer. That is why they hallucinate answers too - “I don’t know” is not an acceptable answer, but generating something plausible that the user takes as truth works.
    You then have to manually try to rein them in and prevent them from talking about things you don’t want them to, but they are trivially easy to fool. IIRC, in one of these suicide cases the LLM did refuse to talk about suicide, until the user told it it was all just for a fictional story. And you can’t really “fix” that without completely banning it from talking about those things on every single occasion, because someone will find a way around it eventually.

    And yeah, they don’t care, because they are essentially just predictive text algorithms turned up to 11. Chatbots like ChatGPT and other LLMs are an excellent example of both meanings of the word “artificial” - they emulate human intelligence by faking being intelligent, when in reality they are not.
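
    To be clear about what “predictive text turned up to 11” means, here’s a deliberately tiny sketch (made-up corpus, nothing like a real model’s scale or architecture): it just counts which word tends to follow which and keeps emitting the most likely continuation, with no concept of whether the output is true.

    ```python
    # Toy illustration only - a bigram "predictive text" model. Real LLMs use
    # neural networks over tokens, but the core loop is the same: predict the
    # next token, append it, repeat. The corpus here is made up.
    from collections import Counter, defaultdict

    corpus = "the model predicts the next word and the next word after that".split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def generate(start, length=8):
        """Keep emitting the most likely next word, true or not."""
        out = [start]
        for _ in range(length):
            options = following.get(out[-1])
            if not options:
                break
            out.append(options.most_common(1)[0][0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the next word and the next word and the"
    ```

    Notice it never says “I don’t know” - it always produces *something* that looks like language, which is exactly the hallucination problem in miniature.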