I’ve been working with so many students who turn to AI as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.
It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.


How do you feel about LLMs such as ChatGPT being used to find birth defects, problematic readings in radiology, design flaws in architecture/engineering, or performance bottlenecks in code?
LLMs don’t do most of the things you’re describing (like radiology readings or birth defects). They are language models; it’s in the name.
What you’re thinking of are generally neural networks trained specifically to classify those things. You can hand those tasks to ChatGPT and it will hallucinate an answer that would seem correct to somebody who doesn’t know better.
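To make the distinction concrete, here is a minimal sketch (assuming PyTorch; the class name, layer sizes, and labels are illustrative assumptions, not any real diagnostic model) of the kind of specialized classifier meant here: it maps an image tensor to per-class scores, which is a very different interface from asking a chatbot a question and getting free-form text back.

```python
# Minimal sketch of a discriminative image classifier (assuming PyTorch).
# Everything here (names, layer sizes, labels) is illustrative, not a real radiology model.
import torch
import torch.nn as nn

NUM_CLASSES = 2  # hypothetical labels, e.g. "finding" vs. "no finding"

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel (grayscale) input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):
        x = self.features(x)             # (batch, 32, 1, 1)
        return self.head(x.flatten(1))   # class logits, not generated text

# The model is trained on labeled scans and only ever outputs scores over fixed classes.
model = TinyScanClassifier()
logits = model(torch.randn(1, 1, 224, 224))  # dummy image tensor
print(logits.shape)  # torch.Size([1, 2])
```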
The fact that everything gets labeled “AI” makes people like you greatly overestimate what ChatGPT can actually do.
Design flaws in engineering? Do you have a source for that? (Something practical, not some experimental PR stunt.)
Those aren’t LLMs. Except maybe the code one.