• FaceDeer@fedia.io · 3 days ago

      Because LLMs see tokens, not letters or words. It’s like showing a human a strawberry and asking them how many atoms it contains.
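      The token point can be sketched with a toy tokenizer. The vocabulary and IDs below are made up for illustration (real models use learned subword vocabularies, e.g. BPE), but the principle is the same: the model receives integer IDs, not letters.

```python
# Toy illustration with a hypothetical two-piece vocabulary --
# NOT a real model's tokenizer.
vocab = {"straw": 1001, "berry": 1002}  # assumed subword pieces

def tokenize(word):
    # Naive greedy longest-match over the toy vocab.
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no vocab piece matches at position {i}")
    return tokens

print(tokenize("strawberry"))  # [1001, 1002] -- two IDs, zero visible letters
```

From the model's side there is no "r" anywhere in `[1001, 1002]`, which is why letter-counting is a strange fit for the architecture.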

      • Noxy@pawb.social · 3 days ago

        Sounds like a genuine shortcoming of the technology as it’s being presented to, and forced on, the public

        • FaceDeer@fedia.io · 3 days ago

          An LLM also can’t bake a cake, decorate a Christmas tree, or bench-press 100kg.

          Just understand what LLMs are good at, use them for that, and don’t throw your hands up and declare them useless because they can’t magically do something they were never designed to do in the first place.

            • FaceDeer@fedia.io · 3 days ago

              I’ve never seen anyone advertising an LLM as being good at spelling bees. The only time I ever see this spelling thing come up is when people are making fun of it.

              • yuri@pawb.social · 3 days ago

                they’re presented as general knowledge chatbots at the very least, and i know i’d consider spelling pretty general knowledge.

                the way i see it you can either acknowledge the “strawberry question” as a genuine failing of almost every publicly accessible LLM, or you can acknowledge that LLMs are only ever actually correct by pure chance. sometimes it’s a REALLY GOOD chance, but at the end of the day it’s still a variable you can’t actually control.
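                The “pure chance” point comes down to sampling: a model picks each next token from a probability distribution, so even a heavily favoured answer can occasionally lose out. A minimal sketch with made-up probabilities (not any real model’s numbers):

```python
import random

# Sketch: the next token is sampled from a probability distribution,
# so a high-probability answer is likely, never guaranteed.
random.seed(0)                      # fixed seed so the demo is repeatable
next_tokens = ["two", "three"]      # hypothetical candidate answers
probs = [0.9, 0.1]                  # a "REALLY GOOD chance" of "two"
samples = random.choices(next_tokens, weights=probs, k=1000)
print(samples.count("three"))       # the unlikely answer still shows up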

                • FaceDeer@fedia.io · 3 days ago

                  You see a false dichotomy.

                  I see someone pounding away at a ball of yarn with a hammer and complaining that it’s not as good a knitting implement as they imagined.

                    • TWeaK@feddit.uk · 1 day ago

                    You have someone complaining about what people selling AI say it can do, when it can’t do that. You see people complaining that AI can’t do things, when it can do other things.

                    You need to try to digest what people are actually saying, rather than just being contrarian.

                    • yuri@pawb.social · 3 days ago

                    in this thread i’ve only seen complaints about the implementation; no one has even implied LLMs are useless.

      • huppakee@feddit.nl · 2 days ago

        It generates a new answer in every new chat; it has no knowledge of itself. You can also easily manipulate its answers by framing your question: if you ask ‘where is the i in strawberry’ or ‘why do you spell strawberry with a single i’, it will spit out something much more wrong than if you ask ‘is there an i in strawberry’. The same goes for convoluted questions like ‘i am about to get fired because i don’t spell strawberry right, what can i do to perform better at driving a taxi for my employer who is an accountant tied up in a scandal’. But because most questions don’t contain contradictions, the AI isn’t seen as dumb and unintelligent but as wise and all-knowing. Again, though: it doesn’t know anything, it just puts words next to each other that statistically fit well together, which can be really useful if you understand its limits.