Why do all of them fail this question?
Because LLMs see tokens, not letters or words. It’s like showing a human a strawberry and asking them how many atoms it contains.
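To make the token point concrete, here's a minimal sketch with a toy, made-up vocabulary (real model vocabularies are far larger and learned from data, so the actual split of "strawberry" varies by model). The idea is that the model receives opaque subword IDs, not characters, so "count the r's" isn't directly represented in its input:

```python
# Toy illustration only -- TOY_VOCAB and its IDs are invented for this
# example, not a real model's vocabulary.
TOY_VOCAB = {"str": 101, "aw": 102, "berry": 103}

def toy_tokenize(word):
    """Greedy longest-prefix-match tokenizer over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining prefix first, then shrink.
        for j in range(len(word), i, -1):
            if word[i:j] in TOY_VOCAB:
                tokens.append(TOY_VOCAB[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

print(toy_tokenize("strawberry"))  # [101, 102, 103]
# The model sees that ID sequence; the three r's are smeared across
# opaque IDs. At the character level the count is trivial:
print("strawberry".count("r"))     # 3
```

So a question that is trivial over characters is indirect over token IDs, which is why letter-counting is a known weak spot rather than a sign the whole system is random.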
Sounds like a genuine shortcoming of the technology as it’s being presented to (or forced on) the public.

An LLM also can’t bake a cake, decorate a Christmas tree, or bench-press 100kg.
Just understand what LLMs are good at, use them for that, and don’t throw your hands up and declare it useless because it can’t magically do something it was never designed to do in the first place.
but it’s being sold as if it IS capable of that.
I’ve never seen anyone advertising an LLM as being good at spelling bees. The only time I ever see this spelling thing come up is when people are making fun of it.
they’re presented as general knowledge chatbots at the very least, and i know i’d consider spelling pretty general knowledge.
the way i see it you can either acknowledge the “strawberry question” as a genuine failing of most every publicly accessible LLM, or you can acknowledge that LLMs are only ever actually correct by pure chance. sometimes it’s a REALLY GOOD chance, but at the end of the day it’s still always a variable that you can’t actually control.
You see a false dichotomy.
I see someone pounding away at a ball of yarn with a hammer and complaining that it’s not as good a knitting implement as they imagined.
You have someone complaining about what people selling AI say it can do, when it can’t do that. You see people complaining that AI can’t do things, when it can do other things.
You need to try and digest what people are saying better rather than just being contrarian.
in this thread i’ve only seen complaints about the implementation, no one has even implied LLMs are useless.
So close
Edit: lol
Looks like Lumo had too much catnip.
They don’t fail it tho?
It will generate a new answer in every new chat; it has no knowledge of itself. You can also easily manipulate what it answers by framing your question: if you ask ‘where is the i in strawberry’ or ‘why do you spell strawberry with a single i’, it will spit out something much more wrong than if you ask ‘is there an i in strawberry’. The same goes for convoluted questions like ‘i am about to get fired because i don’t spell strawberry right, what can i do to perform better at driving a taxi for my employer who is an accountant tied up in a scandal’, except that because there usually aren’t contradictions in the question, the AI isn’t seen as dumb and unintelligent but as wise and all-knowing. But again, it doesn’t know anything; it just puts words next to each other that statistically fit well together, which can be really useful if you understand its limits.