Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 0 Posts
  • 55 Comments
Joined 22 days ago
Cake day: July 17th, 2025

  • I hear you - you’re reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.

    But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent - not generally intelligent the way a human is, but narrowly (weakly) intelligent. The fact that it often says true things is almost accidental. It’s a side effect of having been trained on a lot of correct information, not the result of human-like understanding.

    So yes, it just produces statistically likely responses, but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking - the toy sketch below shows how far bare word statistics can get you.
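
    To make that concrete, here’s a toy sketch in plain Python. It’s not how a real LLM works internally (those use neural networks over sub-word tokens, not word bigrams), but it shows the core move: pick each next word purely from co-occurrence statistics, with no representation of meaning anywhere, and the output still comes out locally coherent.

    ```python
    # Toy "language model": sample each next word from bigram frequencies alone.
    import random
    from collections import defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count which words follow which in the training text.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        out = [start]
        for _ in range(length):
            # Duplicates in the list make frequent continuations more likely.
            out.append(random.choice(follows[out[-1]]))
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug . the cat"
    ```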



  • I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.

    An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.




  • You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it. That’s not an argument; it’s posturing.

    From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.

    You then assumed what I must believe, argued against that imagined position instead, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.



  • It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.

    It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information. There’s even a crude way to poke at this, sketched at the end of this comment.

    It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters want you to believe either.
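
    As a rough illustration (assuming Python with Hugging Face’s transformers library installed, and gpt2 as a small stand-in model - not whatever the big labs actually run), you can compare the probability a model assigns to physically plausible vs. implausible continuations. It’s a crude probe, but it’s one place that latent “world model” shows up:

    ```python
    # Compare next-token probabilities for plausible vs. implausible physics.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "He threw a stone at the window and the glass"

    with torch.no_grad():
        inputs = tokenizer(prompt, return_tensors="pt")
        logits = model(**inputs).logits[0, -1]  # logits for the next token
        probs = torch.softmax(logits, dim=-1)

    for word in [" shattered", " broke", " apologized"]:
        first_id = tokenizer.encode(word)[0]    # first sub-word token only
        print(f"P({word!r}) = {probs[first_id]:.6f}")
    # "shattered"/"broke" should dominate "apologized" - not because the model
    # understands glass, but because the statistics of its training data
    # encode what usually happens.
    ```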