When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.
Recent research goes a long way toward forming a psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. The research also provides experimental evidence on when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives affect that decision.

It’s talking about AI users.
Yes. “Whenever possible”. It doesn’t mean “always”.
Frequency of use doesn’t interfere with what they are trying to measure: whether users consider the possibility of inaccurate answers, or whether they don’t.
If frequency of use is taken into account, and they are only considering users who regularly use AI, then people who try to avoid using AI aren’t part of the data pool. Those people belong to the minority we established is irrelevant to the study.
If, however, they are still surveying people who rarely use AI as well as frequent users, those people can still belong to either of the two categories being studied: those who generally consider the possibility of receiving inaccurate answers, and those who don’t.
Previously you said there are more groups of people, which would prove the dichotomy false, but I fail to see it that way.
For the study, that’s fine. I never argued the study should have more groups.
It’s the article that should be more precise. An article that opens with “there are these two groups” to describe a study that merely examined those two groups, without ever claiming there weren’t more, is inaccurate. That’s bad writing.