• Noxy@pawb.social · 3 days ago

      How does one even know and verify what an LLM’s “sources” are? Wouldn’t it just vomit out whatever response and then find “sources” that happen to match its stupid output after the fact?

      • Pennomi@lemmy.world · 3 days ago

        Precisely my point. But if it is correct and can link to an authoritative source (e.g., a news article), that is relatively easy to verify.

        How much you can trust a news article is still up for debate.
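
        For what it’s worth, the bare-minimum check is mechanical. A rough sketch in Python (the function name and flow are my own illustration, assuming the model hands back a URL plus a quoted passage; this isn’t any particular tool’s API):

        ```python
        import requests
        from bs4 import BeautifulSoup

        def citation_supports_claim(url: str, quoted_passage: str) -> bool:
            """Fetch the cited page and check the quoted passage actually appears in it."""
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            # Strip markup so we compare against the article's visible text only.
            page_text = BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
            return quoted_passage.casefold() in page_text.casefold()
        ```

        That only confirms the page says what the model claims it says, of course, not that the page itself is trustworthy.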

        • Noxy@pawb.social · 2 days ago

          So then all the value it brings is the exact same thing search engines have already been doing for decades.

      • Pennomi@lemmy.world · 3 days ago

        Agreed, I’m more worried about people blindly trusting AI than I am about this particular situation.