I can see someone in the future watching a program run and asking “wow, is that AI? A PERSON typed those cryptic letters? No way!”

  • howrar@lemmy.ca · 4 days ago

    LLMs are necessarily non-deterministic,

    There’s nothing about LLMs that forces them to be non-deterministic. Given the same prompts and context, they will always output the same probability distribution. It’s then up to you what you decide to do with that distribution. If you decide to always choose the most likely token, the entire thing is deterministic. We just don’t do that because it’s less useful than stochastic output.
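    A minimal sketch of the point above, using a made-up toy distribution (real LLMs produce one over tens of thousands of tokens, but the principle is the same): the model's distribution is a deterministic function of the prompt, and randomness only enters at the sampling step.

    ```python
    import random

    # Hypothetical next-token distribution for some fixed prompt+context.
    # The model always emits this same distribution for that input.
    dist = {"cat": 0.6, "dog": 0.3, "fish": 0.1}

    def greedy(dist):
        # Always pick the most likely token -> fully deterministic.
        return max(dist, key=dist.get)

    def sample(dist, rng):
        # Draw from the distribution -> stochastic output.
        tokens, probs = zip(*dist.items())
        return rng.choices(tokens, weights=probs, k=1)[0]

    # Greedy decoding gives the same token every single time.
    assert all(greedy(dist) == "cat" for _ in range(100))
    ```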

    • PeriodicallyPedantic@lemmy.ca · 4 days ago

      Yes, that’s why it’s necessary for it to be non-deterministic. Without non-determinism, there is no error recovery if it chooses the wrong token somewhere in the middle of the completion. That’s much less useful.

        • PeriodicallyPedantic@lemmy.ca · edited · 3 days ago

          So you can try your prompt again and get a different result, and to avoid getting stuck in loops of repeated text, or stuck down a bad line of “reasoning”, etc.

          A low chance of error comes with a low chance of error recovery; conversely, a high chance of error comes with a high ability to recover from errors (mostly just talking about temperature and top-k here).
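          The temperature/top-k trade-off above can be sketched with toy logits (the numbers and token names are made up for illustration): low temperature or small k concentrates probability on the top token, approaching deterministic greedy decoding; higher values spread it out, which is what lets a rerun escape a bad token.

          ```python
          import math
          import random

          def sample_top_k(logits, k=2, temperature=1.0, rng=None):
              """Toy top-k + temperature sampling over next-token logits."""
              rng = rng or random.Random()
              # Keep only the k highest-scoring tokens.
              top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
              # Scale by temperature: lower -> sharper, higher -> flatter.
              scaled = [(tok, v / temperature) for tok, v in top]
              # Softmax over the survivors (shifted by the max for stability).
              m = max(v for _, v in scaled)
              weights = [math.exp(v - m) for _, v in scaled]
              tokens = [tok for tok, _ in scaled]
              return rng.choices(tokens, weights=weights, k=1)[0]

          logits = {"cat": 4.0, "dog": 2.0, "fish": 0.5}

          # k=1, or temperature near zero, is effectively greedy: no recovery,
          # but also no random "errors".
          assert sample_top_k(logits, k=1) == "cat"
          assert sample_top_k(logits, k=3, temperature=0.01) == "cat"
          ```

          With k=3 and a higher temperature, "dog" and "fish" get real probability mass, so repeated runs can diverge: that is the recovery the comment is describing.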