Pro@programming.dev to Technology@lemmy.world · English · 21 hours ago
Do chatbots have a moral compass? Researchers turn to Reddit to find out. (news.berkeley.edu)
Marshezezz@lemmy.blahaj.zone · 21 hours ago
No, they do what they’ve been programmed to do because they’re inanimate
Electricblush@lemmy.world · 16 hours ago
A better headline would be that they analyzed the embedded morals in the training data… but that would be far less clickbait…
Marshezezz@lemmy.blahaj.zone · 16 hours ago
They’ve created a dilemma for themselves cos I won’t click on anything with a clickbait title
Cousin Mose@lemmy.hogru.ch · 15 hours ago
Right? Why the hell would anyone think this? There are a lot of articles lately like “is AI alive?” Please, it’s 2025 and it can hardly do autocomplete correctly.