When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.
Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. The research also offers experimental evidence on when and why people are willing to outsource their critical thinking to AI, and on how factors like time pressure and external incentives affect that decision.



Oh, so we’ve amalgamated all the Facebook conspiracy theories with the 4chan conspiracy theories, along with whatever percentage of garbage political messaging, everyone’s major religious texts, and basically the sum of all art, knowledge, and advertising that was available on the Internet at the time.
If I were really honest, I’d say that last one is the one that bothers me most. The vast majority of our modern media is dumb ads; it has been, really, since the Victorian era. My unscientific guess is that the bulk of modern media is designed to wheedle past your logic and make your emotions want to buy various petroleum products.
And out of all that mess, we’re expecting what?