• oyo@lemmy.zip · 10 hours ago

    We’ll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this, but they can’t tell the investors this.

    • JcbAzPx@lemmy.world · 1 hour ago

      Also not likely in the lifetime of anyone alive today. It’s a much harder problem than most want to believe.

      • scratchee@feddit.uk · 4 hours ago

        Possible, but seems unlikely.

        Evolution managed it, and evolution isn’t as smart as us; it just had many, many chances to guess right.

        If we can’t figure it out ourselves, we can find a way to get lucky like evolution did. It’ll be expensive, and it may need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).

        So yeah. My money is that we’ll figure it out sooner or later.

        Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.

    • ghen@sh.itjust.works · 10 hours ago

      Once we get to AGI, it’ll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.

      • Buddahriffic@lemmy.world · 7 hours ago

        Calling the errors “hallucinations” is kind of misleading, because it implies there’s normally real knowledge and false stuff just gets mixed in. That’s not how LLMs work.

        LLMs are purely about associations between words. They’re just massive enough to attach a lot of context to those associations and seem conversational about almost any topic, but there’s no depth to any of it. Where they seem to have depth, it’s only because the contexts in their training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather, people hired by people hired by its owners) could get their hands on.

        All a model does is predict, given the prompt tokens and the tokens it has already generated, plus a bit of randomness, the most likely next token, then repeat until it predicts an “end” token.
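
        To make that loop concrete, here’s a toy sketch in Python. The probability table is completely made up for illustration; a real LLM replaces it with a neural network over billions of parameters that scores every token in its vocabulary, but the generation loop itself looks roughly like this.

        ```python
        import random

        # Made-up table standing in for the "model": given the last two tokens,
        # it assigns a probability to each possible next token.
        NEXT_TOKEN_PROBS = {
            ("the", "cat"): {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
            ("cat", "sat"): {"on": 0.7, "down": 0.3},
            ("sat", "on"): {"the": 0.8, "a": 0.2},
            ("on", "the"): {"mat": 0.9, "rug": 0.1},
            ("on", "a"): {"mat": 1.0},
            ("the", "mat"): {"<end>": 1.0},
            ("a", "mat"): {"<end>": 1.0},
            ("the", "rug"): {"<end>": 1.0},
            ("sat", "down"): {"<end>": 1.0},
            ("cat", "ran"): {"<end>": 1.0},
        }

        def generate(prompt, max_tokens=20):
            tokens = list(prompt)
            for _ in range(max_tokens):
                context = tuple(tokens[-2:])               # what the "model" conditions on
                dist = NEXT_TOKEN_PROBS.get(context, {"<end>": 1.0})
                choices, weights = zip(*dist.items())
                next_token = random.choices(choices, weights)[0]  # the bit of randomness
                if next_token == "<end>":                  # stop at the "end" token
                    break
                tokens.append(next_token)
            return " ".join(tokens)

        print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
        ```

        Everything the model “knows” lives in those numbers; there’s no separate store of facts it consults.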

        Earlier on when using LLMs, I’d ask them how they did things or why they failed at certain things. ChatGPT would answer, but only because it was trained on text describing what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all, and the text it was trained on doesn’t even have to reflect how it really works.

        • nialv7@lemmy.world · 2 hours ago

          Well, you described pretty well what LLMs were trained to do, but from there you can’t derive how they do it. Maybe they don’t have real knowledge, or maybe they do. Right now literally no one can definitively claim it one way or the other, not even top ML researchers in the field. (They may have opinions, though.)

          I think it’s perfectly justified to hate AI, but it’s better to have a less biased view of what it is.