Look, I’m not trying to argue against your moral stance. I’m neither saying it’s wrong nor that it’s outweighed by any usefulness, real or not. What I’m trying to do is get you to see that your claims about uselessness are undermining your moral argument, which would be a hell of a lot stronger if you weren’t hell-bent on denying any kind of utility! Because in the eyes of people who do perceive LLMs as useful (which is exactly the kind of people who need to hear about the moral issues), that just makes you seem out of touch and not worth listening to.
“It’s useless for security analysis.”
Have you looked at any of the four links I provided? You might be working from old data here, because this is a very recent development, but a lot of high-profile open source maintainers are saying that AI-generated security reports are now generally pretty good and not slop anymore. They’re fixing actual bugs because of them, and more than ever. How can you call that useless?
“Surely, the energy cost to verify the translation would be the same as translating it?”
Uh, no? Have you ever translated something? Verifying a translation happens mostly at attentive reading speed. Double that, because I’ll probably read it twice overall to focus on content and grammar separately, then add some overhead for correcting the occasional flaw and checking one or two things I’m unsure about off the top of my head. So for the sake of argument, let’s say three times slower than just reading normally. I don’t know about you, but three times slower than reading is still a lot faster than I could produce a translation from scratch, weighing different word options against each other, working out how to get some flow into the reading experience, and so on.

If I’m translating into a language that I’m fluent but not native in, it takes even longer, because the ratio between my passive and active vocabulary is worse. I can read (and thus verify) English at a much more sophisticated level than I’m able to speak or write it, because the words and native idioms just don’t come to me as naturally, or sometimes not at all without a lot of mental effort and a thesaurus. LLMs are just plain better at writing English than I have any hope of becoming in my lifetime, and I can still fully and easily verify the factual, orthographic, and grammatical correctness of what they output. Those two things are not mutually exclusive.
“It’s useless for rhyming (I notice you didn’t mention that one).”
Yeah, because I’m focusing on the more relevant things. I disagree that it’s completely useless for rhyming, but it’s a much weaker and more contrived point than the others, and going into that discussion would just derail things further for no added value. Also, funny that you call me out on that, when you fully ignored two use cases I mentioned in my initial comment (LLMs proofreading texts, and answering questions about unfamiliar code bases). Those have a lot of legitimate utility for someone who isn’t aware of the moral issues or doesn’t care about them. And once again, that’s my point here: those people will not listen if they perceive you as talking about a fictional world where LLMs are completely useless, which fails to match up with their experience.

No, that’s not what I’m saying. I’m saying that if someone wants their argument to be taken seriously, they should be willing to reevaluate parts of it that they’re very obviously wrong about, especially if, by their own admission, those parts don’t even matter in the face of the rest of the argument.
I’m just fed up with people feeling the need to hold strong opinions on everything, even things they don’t actually know much about. It’s fine if you don’t know anything about how capable current LLMs actually are. Especially as an opponent of LLMs on moral grounds, it makes total sense that you’d avoid them and thus not be particularly well informed. That doesn’t weaken your argument in any way. As long as you seem to have a good grip on what you know and what you don’t, it’s all good. But being confidently wrong about things and refusing to reevaluate when you get pushback signals that you neither know nor care about the limits of your own knowledge, and that makes your entire argument untrustworthy.