• Hemingways_Shotgun@lemmy.ca · ↑ 61 · 1 day ago

    The fact that any AI company thought to train their LLM on the answers of Reddit users speaks to a fundamental misunderstanding of their own product (IMO).

    LLMs aren’t programmed to give you the correct answer. They’re programmed to give you the most pervasive/popular answer on the assumption that most of the time that will also happen to be the right one.

    So when you’re getting your knowledge base from random jackasses on Reddit, a good-faith question like “What’s the best way to get gum out of my child’s hair?” gets two good-faith answers, and then a few dozen smart-ass answers that get lots of replies and upvotes because they’re funny. Guess which one your LLM is going to use.

    People (and apparently even the creators themselves) think that an LLM is actually cognizant enough to be able to weed this out logically. But it can’t. It’s not an intelligence…it’s a knowledge aggregator. And as with any aggregator, the same rule applies:

    garbage in, garbage out
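To make the garbage-in/garbage-out point concrete, here is a toy Python sketch of popularity-based answer selection; the answers and vote counts are invented purely for illustration:

```python
# Hypothetical toy example: naive popularity-weighted aggregation.
# Each tuple is (answer text, upvote count); all values are made up.
answers = [
    ("Work peanut butter into the gum, then comb it out.", 4),       # good-faith
    ("Use ice cubes to harden the gum before picking it off.", 3),   # good-faith
    ("Just shave the kid bald, problem solved.", 57),                # joke, heavily upvoted
]

# Picking whatever scored highest rewards the funny answer, not the right one.
best = max(answers, key=lambda a: a[1])
print(best[0])  # the joke wins on popularity alone
```

Any aggregator that optimizes engagement signals alone inherits whatever those signals reward — in this case, the joke.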

    • mcv@lemmy.zip · ↑ 13 · 11 hours ago

      The main thing that AI has shown is how much bullshit we subconsciously filter out every day without much effort. (Although clearly some people struggle a lot more with distinguishing between bullshit and fact, considering how much politicized nonsense has taken hold.)

      • Hemingways_Shotgun@lemmy.ca · ↑ 2 · edited · 2 hours ago

        Exactly that.

        If I were to google how to get gum out of my child’s hair and be directed to that same Reddit post, I’d read through it and be pretty sure which answers were jokes and which were serious; we make such distinctions, as you say, every day without much effort.

        LLMs simply don’t have that ability. And the number of average people who just don’t get that is mind-boggling to me.

        I also find it weirdly dystopian that, if you sum that up, it kind of sounds like for an LLM to take the next step towards A.I., it needs a sense of humour. It needs the ability to tell whether the information it’s digging from is serious, or just random jackasses on the internet.

        Which is turning it into a very, very Star Trek problem.

    • dil@lemmy.zip · ↑ 3 · 8 hours ago

      Usually the third or fifth comment down is correct, while the top four are jokes.

    • bridgeenjoyer@sh.itjust.works · ↑ 22 ↓ 1 · 1 day ago

      That’s why I have stopped calling it AI. It’s a dumbass buzzword, just like cloud (or blockchain), that tech bros like to use but can’t explain.

      It’s LLMs and image generators/OCR (which has been around for decades), using complex Markov chains and a fuck-ton of graphics cards. NOT AI. NOT AI.
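For the curious, the Markov-chain idea the comment invokes can be sketched in a few lines of Python. Real LLMs condition on far longer contexts than one word, so treat this strictly as a loose analogy; the training text is invented for the example:

```python
import random
from collections import defaultdict

# Minimal word-level Markov chain: the next word is chosen purely from
# what followed the current word in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=5, seed=0):
    """Walk the chain `length` steps from `start` (seeded for repeatability)."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        # Fall back to the start word if we hit a dead end (e.g. "fish").
        word = random.choice(transitions.get(word, [start]))
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The chain only ever "knows" the statistics of adjacent words — a cartoon version of the frequency-driven behaviour described above.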

      • sugar_in_your_tea@sh.itjust.works · ↑ 5 · edited · 18 hours ago

        It is AI, along with a bunch of optimization algorithms, statistical decision trees (probably used in adaptive AI in games), etc. AI is a field in computer science that includes a ton of things many wouldn’t consider AI.

        Basically, if the solution doesn’t come from direct commands but instead comes from some form of learning process, it’s probably AI.

        It’s not “general AI”, but it is in the field of AI.
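The “learned rather than directly commanded” distinction drawn above can be illustrated with a deliberately tiny sketch; the data and the one-parameter threshold search are made up for the example:

```python
def rule_based(x):
    # Direct command: the programmer fixed the threshold by hand.
    return x > 10

def fit_threshold(examples):
    # "Learning": pick the threshold that best separates labeled examples.
    # A one-parameter stand-in for decision trees, perceptrons, etc.
    candidates = sorted(x for x, _ in examples)
    def errors(t):
        return sum((x > t) != label for x, label in examples)
    return min(candidates, key=errors)

# Labeled examples of "is this number large?"
data = [(2, False), (4, False), (9, False), (12, True), (20, True)]
learned_t = fit_threshold(data)
print(learned_t)  # threshold inferred from the data, not hard-coded
```

Both functions classify, but only the second derives its parameter from data — the hallmark of the learning-based methods that fall under the AI umbrella.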

        • enbipanic@lemmy.blahaj.zone · ↑ 6 · 10 hours ago

          I would argue we need to go back to Machine Learning.

          The field is machine learning, generative machine learning, etc.

          This rebrand to AI is doing nothing but confusing people and building investor hype.

          • sugar_in_your_tea@sh.itjust.works · ↑ 3 · 5 hours ago

            Back? Machine Learning has always been a subfield of artificial intelligence since it all started in the 1950s or so. The end goal is to create general AI, and each field in AI is considered a piece of that puzzle, including LLMs.

              • sugar_in_your_tea@sh.itjust.works · ↑ 1 · edited · 18 minutes ago

                It’s more specific, sure, but there’s nothing dishonest about using the same terminology the field has used since the 1950s.

                The disconnect is that average people have a different understanding of the term than is used in computer science, probably because of sci-fi films and whatnot. When I hear “AI,” I think of the CS term, because that’s my background, but when my family hears “AI,” they think of androids and whatnot like in Bicentennial Man.

                I don’t know how to square that circle. Neither group here is wrong, but classifying something like ChatGPT as “AI,” while correct, is misinterpreted by the public, who assume it’s doing more than it is.

      • filcuk@lemmy.zip · ↑ 2 · edited · 19 hours ago

        I’ve had to start calling it AI because most people don’t know what LLMs are, and no one cares to sit through the explanation, including myself.
        I’m afraid that, as these things go, AI has gained a new meaning through popular use, rather than the original meaning of the acronym.
        No point fighting it anymore.

        Maybe we just need to adjust and start saying GAI (generative AI). It has a nice ring to it too.

        • bridgeenjoyer@sh.itjust.works · ↑ 5 ↓ 3 · 18 hours ago

          If people are so dumb they don’t even know what an LLM is, they have no business using any “AI” products. Too bad we can’t ban dumb people from using tech that will make them dumber.