• undefinedTruth@lemmy.zip · 5 points · 2 days ago

    You give it too much credit. To be able to lie, it would first need to actually understand what it writes. LLMs are text prediction algorithms. They cannot think.

  • burble@lemmy.dbzer0.com · 4 points · 2 days ago

    The number of times I’ve gone to look up a topic I know something about and seen something wrong in an AI summary I didn’t even ask for…

  • QualifiedKitten@discuss.online · 3 points · 2 days ago

    I’ve turned off the AI summaries, but occasionally ask one of DDG’s AI models a question, and the responses almost always have blatant errors. Yesterday, I did a manual search first, then asked 2 of the AI models whether Arm & Hammer currently sells any non-clumping clay litters. They gave me a couple of products they claimed were non-clumping, but when I pulled up the product listings to buy them, all of them were very clearly labeled as clumping.

    Makes it really hard to trust AI for things I don’t know when they’re so often so obviously wrong about things I do know and can easily verify.

    • sakuraba@lemmy.ml · 1 point · 2 days ago

      Just writing “non-clumping” in the prompt tends to make the LLM latch onto the surface token “clumping” most of the time, and that’s exactly the type of results it gives you back.
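      Rough Python toy below (made-up product names and descriptions, and a plain bag-of-words overlap rather than anything a real LLM or search engine actually does) just to show the effect: the “non” prefix contributes almost nothing to the match, so “clumping” products score nearly as high as the genuinely non-clumping one.

      ```python
      # Toy sketch only: naive bag-of-words overlap, NOT how an LLM works.
      # Illustrates why a query containing "non-clumping" still matches
      # "clumping" products: the shared token "clumping" dominates, and the
      # negation prefix "non" carries almost no weight.
      import re
      from collections import Counter

      def tokens(text: str) -> Counter:
          # Lowercase word counts; "non-clumping" splits into "non" + "clumping".
          return Counter(re.findall(r"[a-z]+", text.lower()))

      def overlap_score(query: str, doc: str) -> int:
          # Count word occurrences shared between the query and a description.
          q, d = tokens(query), tokens(doc)
          return sum(min(q[w], d[w]) for w in q)

      # Hypothetical product descriptions, invented for illustration only.
      products = {
          "Litter A": "clumping clay cat litter, multi-cat formula",
          "Litter B": "non-clumping clay cat litter, low dust",
          "Litter C": "clumping clay litter with baking soda",
      }

      query = "non-clumping clay litter"
      for name, desc in sorted(products.items(),
                               key=lambda kv: overlap_score(query, kv[1]),
                               reverse=True):
          # The clumping products land only one point behind the non-clumping one.
          print(name, overlap_score(query, desc), desc)
      ```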

      It is not intelligent at all lol