China #1
Best friends with the mods at c/worldnews@lemmy.ml

  • 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • But, like a human, it mostly tries to stick to the truth. It does get things wrong, and in that way it is more like a 5-year-old, because it won’t understand that it is fabricating things. Still, there is a moral code it is programmed with, and it does mostly stick to it.

    To write off an LLM as a glorified chatbot is disingenuous. They are capable of producing everything that a human is capable of, just in a different ratio. Instead of learning everything slowly over time and forming opinions based on experience, they are given all of the knowledge of humankind and told to sort it out themselves. Like a 5-year-old with an encyclopedia set, they are gonna make some mistakes.

    Our problem is that we haven’t found the right ratios for them. We aren’t specializing the LLMs enough to make sure they have a limited enough library to pull from. If we made the datasets smaller and didn’t force them into “chatbot” roles where they have carte blanche to say whatever they like, LLMs would be in a much better state than they are now. A toy sketch of what that kind of specialization could look like is below.
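    To make that last point concrete, here is a rough sketch in Python of what “a limited enough library to pull from” could mean. To be clear, there is no real LLM here; the corpus, the word-overlap scoring, and the threshold are all invented for illustration. The point is the gating: the assistant only answers from a small curated dataset and refuses everything else.

```python
# Toy sketch: a "specialized" assistant that can only draw from a small,
# curated corpus instead of having carte blanche to say anything.
# No real LLM involved; the corpus, scoring, and threshold are illustrative.

DOMAIN_CORPUS = {
    "photosynthesis": "Plants convert light, water, and CO2 into glucose and oxygen.",
    "mitosis": "Mitosis is cell division that produces two identical daughter cells.",
}

def overlap_score(query: str, text: str) -> int:
    """Crude relevance measure: count words shared by the query and an entry."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def specialized_answer(query: str, threshold: int = 1) -> str:
    """Answer only from the curated corpus; refuse out-of-domain questions."""
    topic, text = max(
        DOMAIN_CORPUS.items(),
        key=lambda item: overlap_score(query, item[0] + " " + item[1]),
    )
    if overlap_score(query, topic + " " + text) < threshold:
        return "Out of scope for this assistant."
    return f"[{topic}] {text}"

print(specialized_answer("How does photosynthesis work?"))  # answered from corpus
print(specialized_answer("Who won the 1998 World Cup?"))    # refused: not in scope
```

    A real version would swap the word-overlap for embeddings and put an actual model behind the gate, but the shape is the same: shrink what it can pull from, and the room for fabrication shrinks with it.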