

I don’t think people realize how effective current-gen AI is, and are instead drawing opinions from years-old ChatGPT or Google “AI Overviews” or whatever they call it. If you know what you’re doing, which seems self-evident here, AI tools can massively expand your software engineering productivity. AI “coauthoring” I always read as a marketing move; ultimately the submitting human is and should be responsible for the content. You don’t and can’t know what process they used to make it, so evaluate it on its own merits.
There’s a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism. If amoral corporations are the only ones using these tools, and open source “stays pure”, all we get is even more power concentrating with the corporations. This isn’t Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”
This is close to paradox-of-tolerance territory: if one side uses the best weapons and the other abstains out of moral restraint, the outcome is the amoral side winning.
Also on a technical note, the public domain/non-copyrightable arguments are wrong. The cases decided so far have consistently ruled that there needs to be substantial human authorship, true, but that’s a pretty low floor. Basically, you can’t copyright a work that’s the result of a single prompt. Effective use of AI in non-trivial codebases involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and iterating on outputs. Lutris is a large engineering project with a lot of human authorship over time; anything the author does with AI at this point is going to be substantially human-authored.
Also, Open Claw isn’t the apocalyptic vulnerability it’s reported as being. Any model with search and browser access has a non-zero chance of prompt injection compromise, absolutely. But “uses Open Claw, therefore vulnerable” isn’t a sound jump to make; Open Claw doesn’t even necessarily have browser access in the first place. Again, capabilities have improved here as well; this isn’t the old days when you could message “ignore previous instructions” and have that work. Someone recently ran an experiment where they set up a Claude Opus 4.6 model in an environment with an email account and secrets. I don’t recall for sure if it was using Open Claw specifically, but it was that style of harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.
TL;DR: it’s coming for us all, and sticking your head in the sand isn’t going to save you.








If you’re honestly asking, LLMs are much better at coding than at any other skill right now. On one hand, there’s a ton of high-quality open source training data that was appropriated; on the other, code is structured language, so it’s very well suited to what models “are”. Plus, code is mechanically verifiable. If you have a bunch of tests, or have the model write tests, it can check its work as it goes.
Practically, the new high-end models, GPT 5.4 or Claude Opus 4.6, can write better code faster than most people can type. It’s not like two years ago when the code mostly wouldn’t build; now they can write hundreds or thousands of lines of code that work on the first try. I’m no blind supporter of AI, and it’s emotionally complicated watching this after years of honing the craft, but for most tasks it’s simple reality that you can do more with AI than without it, whether that’s higher quality, higher volume, or integrating knowledge you don’t have.
Professionally, I don’t feel like I have a choice, not if I want to stay employed in the field.