• Leon@pawb.social · 5 months ago

      That’s a misrepresentation of what LLMs do. You feed them a fuckton of data and they, to oversimplify a bit, place the concepts in that data into a high-dimensional map (an embedding space). Then, given an input, the model estimates an output by referencing that map. It doesn’t search for anything; it’s just mathematics.

      It’s particularly easy to demonstrate with image models, where you could take two separate concepts, say “eskimo dog” and “daisy”, and add them together.
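
      A minimal sketch of that “adding concepts” idea, assuming the open clip-ViT-B-32 text encoder via sentence-transformers (any embedding model would illustrate the same thing): summing the vectors for two concepts lands you nearest to a caption that blends both. The candidate captions here are made up for illustration.

      ```python
      # Concept arithmetic in an embedding space (sketch, not ChatGPT's internals).
      # Assumes: pip install sentence-transformers
      from sentence_transformers import SentenceTransformer
      import numpy as np

      model = SentenceTransformer("clip-ViT-B-32")

      # Embed the two concepts and "add" them as vectors.
      concepts = model.encode(["an eskimo dog", "a daisy"])
      combined = concepts.sum(axis=0)

      # Hypothetical candidate captions to compare against.
      candidates = ["a dog", "a flower", "a dog made of flowers", "a car"]
      cand_vecs = model.encode(candidates)

      # Cosine similarity: the blended vector should sit closest to the
      # caption that mixes both concepts.
      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      for text, vec in zip(candidates, cand_vecs):
          print(f"{text:25s} {cosine(combined, vec):.3f}")
      ```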

      When you query ChatGPT for something and it “searches” for it, either the model is fitted well enough to reproduce a link directly, or it calls a tool that performs a web search (likely using Bing) and compiles the results for you.

      You could do the same, just using an actual search engine.

      Hell, you could build your own “AI search engine” with an open weights model and a little bit of time.
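
      For illustration, a minimal sketch of such a DIY setup, assuming the duckduckgo_search package for the search half and a local Ollama server with an open weights model already pulled (e.g. `ollama pull llama3`); any search API and local runtime would do. Note the LLM never “searches” anything; it only compiles what the real search engine returned.

      ```python
      # DIY "AI search engine": real web search + local open weights model.
      # Assumes: pip install duckduckgo_search requests, and Ollama running locally.
      import requests
      from duckduckgo_search import DDGS

      def ai_search(query: str, model: str = "llama3") -> str:
          # 1. Actual web search: titles, URLs, and snippets.
          hits = DDGS().text(query, max_results=5)
          context = "\n".join(
              f"- {h['title']}: {h['body']} ({h['href']})" for h in hits
          )

          # 2. The model only summarizes the results it was handed.
          prompt = (
              "Using only these search results, answer the question "
              f"and cite the URLs.\n\nResults:\n{context}\n\nQuestion: {query}"
          )
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": model, "prompt": prompt, "stream": False},
          )
          return resp.json()["response"]

      print(ai_search("what is an open weights model"))
      ```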

        • chloroken@lemmy.ml · 5 months ago

          “Accurate” and “accurate enough” have completely different meanings. Calculators are not “accurate enough”; they are accurate, and the fact that you’re conflating the two notions is exactly why LLMs are useless for most things people employ them for.

            • chloroken@lemmy.ml · 5 months ago

              You are indeed conflating the two ideas. I said “useless for most things they’re utilized for”, and if you had quoted the entire sentence your argument would fall apart, and you know that.