- cross-posted to:
- OpenSource
- pcmasterrace@lemmit.online
- technology@lemmit.online
I am the c/fuck_ai person, but at this point I've made peace with the fact that we can't avoid it. I still don't want it doing artsy stuff (image gen, video gen), or being used blindly in critical work, because humans are the ones who should be doing that, or at least providing constant oversight. I think the team's logic is correct here, because there is no way to know whether code is from an LLM or a human unless something in it screams LLM or the contributor explicitly mentions it. Mandating the latter seems like a reasonable move for now.
I consider myself to be more pro-AI than not, but I'm certainly not a zealot, and I mostly agree with the take that it shouldn't be used in artistic pursuits. However, I love using AI to help me create art. It can give great critiques, often good advice on how to improve, and is great for rapid experimentation and prototyping. I actually used it this weekend to see what a D&D mini might look like with different color schemes before painting it. I could have done the same with Gimp, but it would have taken much longer for worse results, and it was ultimately just a brainstorming session. How do you feel about my AI usage from your perspective? I suppose from an energy conservation perspective, all of it was bad, but I'm more interested in a less trivial take.
Yes the energy consumption is bad. My main gripe about LLM generated art is that it will not be original. It will use its training data from uncredited artworks to generate it. Art usually is made by humans to express something or convey something in a creative way. LLMs fail at that. What LLMs can actually be helpful at is making learning art more accessible to everyone. Art schools or private art classes can be expensive. This lowers the barrier to entry.
As for you using generated art: it might be really beautiful, but it will be very difficult to maintain that style, and even more difficult to convince anyone that it is your style. The artist doesn't get much recognition with LLM-generated art. Using it as a critic also seems misguided, because LLMs will always try to give an objective view rather than a subjective one. Your art won't trigger an emotion in it; it might just say it's bad, or "do this to make it more understandable", and that's where you lose as an artist.
My mom likes to paint as a hobby. What she does is search for stuff on Pinterest (which is mostly LLM-generated). She uses it as inspiration, paints it in her own style, and maybe gives it her own spin. She keeps all of it for herself.
I’m a writer. I’ve been paid to write a few things here and there, but mostly there are just huge barriers for people without connections.
I plan on using AI to turn my writing into a visual animated format for people to consume. I don’t much care about the style of art, I just want my work to be seen. I can’t afford to pay for artists. If I could, I would. But at least, this would give me an opportunity to show my work without some execs saying no a hundred times.
When I look at the art for cartoons in the 70s/80s, there is so much crap animation with mistakes and duplications, you would think it’s “a.i. slop.” I understand that these were done overseas, pumped out quickly so quality control was overlooked for speed… but it wasn’t the animation I was interested in, it was the stories and characters.
I still think original artists will continue to exist. A.I. is just another tool. People will get bored of the same old stuff and want originality. I really hope it’ll make our lives better in the long run, but we’re just in the weird middle stage of A.I. crawling before running.
I can’t afford to pay for artists
You can afford LLMs right now because all of the LLM companies are losing money on them. If they decide they want to make a profit, they will raise their prices significantly, so you still end up in the same situation. You also don’t have much control over what an LLM spits out, whereas with manual animation you have total control, or can at least sit with an actual animator to make it look how you envision it.
I plan on using AI to turn my writing into a visual animated format for people to consume.
What makes you think that people will respond the same way, and in the same numbers, to LLM-generated animation as they would if it were crafted by an artist? I reckon the response will be much lower. I see it on YouTube constantly. I watched a video about a topic, then got recommended something related to it from a different channel. Guess what? The script and the animation were so damn similar, and the shit they were spewing wasn’t even true in the end. Everything both channels made was slop. Sure, they spit out more content than conventional methods allow, got a few thousand views per video, and made decent money on it. But they aren’t going to last long if they want audience retention.
Since then I have been more mindful about which videos I click on, even going to the extent of disabling recommendations and watch history.
I have downloaded my own LLM that can be used on my own computer… So the only cost is electricity, since I upgraded my computer before the prices went to shit. Newegg even gave me free RAM with the purchase of a motherboard, so I lucked out on that. Storage is not an issue either, since I got that back in 2024 knowing Trump would fuck everything up.
And no, people might not respond the same way to my work, but then again I’m not taking any work away from anyone else because then it would not even exist. If you want to fund me and the artist for our work, then okay. Show me the money.
One thing I’ve noticed is that I see many more people complain about slop than slop itself. It’s so annoying at this point that it’s making me go in the opposite direction. Hey everyone, slop here… Microsoft slop here… Use Linux Linux Linux. Slop slop slop. Sloppy joes. It’s like candlestick makers complaining to Nikola Tesla.
Another great example of how AI is just wreaking havoc on people’s brains.
- Wants to show an enticing product to execs, doesn’t want to invest in paying an artist
- realizes they have to have connections but doesn’t want to network
- wants recognition of their hard work, hasn’t sought out a community or collaboration but states “show me the money”
AI will fix everything for me! Slop doesn’t exist! (Ignores the very article we’re in, every platform algorithm feed, the US president shitposting, all the slop that gets presented here.) Go get ’em Nik, don’t let haters stop your brilliance.
A very extreme takeaway, but okay.
my own LLM that can be used on my own computer
May I ask how many billion parameters it has? Because the paradox here is:
- If it is weak, then you will be getting much, much worse results than even the big models the corpos run (and we don’t even know how big those are, tbh), let alone the quality of an actual artist.
- If you have a respectably powerful model, then your PC probably cost thousands of dollars (even ignoring the price hikes), which undermines the excuse that you can’t afford to pay an actual artist.
Definitely not a big fan of it, but realistically speaking, it’s here to stay. It is wise for them to govern and regulate it rather than outright ban it. Especially with a project as big as this one, people will try. Saying that the responsibility falls on the human is definitely the right move.
any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.
Watch Americans and their companies pull some mad gymnastics in apportioning blame for this.
Well yeah, it’s the human submitting the code, and using a tool known to be imperfect.
Your comment is pretty dumb
At this point it’s 23 to -5 in opinions on that dumb comment, sunshine.
Because obviously the majority is always right.
Linux kernel being written by Microsoft’s AI.
Microsoft needs to try to ruin Linux somehow; it can’t just hurt Windows 11 with AI slop code, it needs to expand its efforts to other systems.
which is trained on free and open source code
That will definitely not introduce some weird things when it starts feeding on itself.
Maintainers’ only responsibility is to ensure quality and shouldn’t have to check for rogue AI submissions.
Tho I still miss consistent fucking weather so year of the netbsd?
Ensuring you don’t approve garbage, whether human- or AI-generated, is part of quality.
AI is here, another tool to use…the correct way. Very reasonable approach from Torvalds.
I don’t have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can’t fix a sentence in a slide deck without using an LLM.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
This is a big point. People need to understand that LLMs are more like a fancy graphing calculator: they are very good and can handle many things, but it’s on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or factorials. We want the results, and for students and professionals to focus on the concept.
I get the metaphor but it’s not a great one for AI in mathematics especially. A statistical word generator is not going to perform reliable math and woe to anyone who acts otherwise.
I would call it an autistic sycophantic savant with brain damage. It’s able to perform apparently miraculous feats of memory and creativity, but then it’s unable to tell reality from fiction, or to tell whether even the simplest response is valid, and it will likely lie about it to seem more competent and please you.
If you have a use for an assistant like that, then great. But a calculator - simple and cheap and reliable - it definitely is not.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
That seems too general. I’m a mobile developer, and sometimes I need a simple script outside my knowledge area. I needed to scrape a website recently, not for anything serious, but to save me time. Claude wrote it and it works. It’s probably trash code, but it works and it helped. But you wouldn’t want me using Claude to do important work outside my specific area of focus either, or I’m sure I’d cause problems.
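For what it’s worth, a throwaway scraper like that is often only a handful of lines even without an LLM. Here is a minimal sketch using only the Python standard library (the HTML and link-extraction goal here are made up for illustration, not the actual script):

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag fed to the parser."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html: str) -> list[str]:
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


# For a live page you would fetch the HTML first, e.g.:
#   from urllib.request import urlopen
#   html = urlopen("https://example.com").read().decode("utf-8")
print(extract_links('<p><a href="/a">a</a> and <a href="/b">b</a></p>'))  # prints ['/a', '/b']
```

For anything serious you’d reach for a proper library like BeautifulSoup, but for a one-off time-saver this kind of sketch is plenty.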
I’m also a mobile app dev and at my workplace they’re having non-mobile devs submit code to my codebases totally vibed with no understanding behind it. It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
So yes basically you’re right. If people only used it to learn and do initial code review passes and other reasonable things we’d be totally fine. But that’s unfortunately not the reality 🙈
It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
The next step is the CEO going: look at how good these non-mobile devs are, they’re submitting 10x the commits to the mobile repo compared to boraginoru, our mobile dev! We should fire him and just let the backend devs keep vibe coding it!
I’m talking about accountants who now think they can create software. Or engineers who think they can now write legal briefs for court.
Very frustrating for sure. Like any tool, it’s up to humans to know when the tool is useful.
Partly a marketing issue.
Companies keep advertising their new AI’s as destroyers of worlds, and something that’s too dangerous to even release.
As with anything else, the average user will have no more than the most surface-level understanding of the tool.
Clickbait got me. No mention of “Yes copilot” which I assumed was a joke anyway.
👆🏻true
Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?
Which is why it’s in the title and not the article? EntertainBait?
I suppose GitHub Copilot is meant, which is a different thing.
Different how? Isn’t GitHub owned by Microsoft?
There are like 70 copilots
The hell. How can they expect people to understand? They plan to sell 100 things under the same name and pass it off as one big AI, when it’s really hundreds of different, unrelated things?
They’ve never been good at naming things, but now they seem to be going out of their way to be the worst with the names of their software. For instance, they named the successor to the already generically named “Remote Desktop” app “Windows App”.
This one is funny. Go google windows app commands. They just fucked sysadmins
Most of those are bundled; no one is buying Copilot for OneNote, they just get it when they get the rest of that suite.
Ok, so there are 70-81 copilots, github is one of them.
Why is github copilot a different thing in the context of the reply that was being responded to ?
Copilot is the harness, Claude and GPT are the models
Copilot is by far the worst harness of all the major players
Yes, I get that; Copilot is like opencode or Cursor, though perhaps with less general access to models.
There was a reply
Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?
followed by
I suppose GitHub Copilot is meant, which is a different thing.
I was asking why GitHub Copilot is different in that context.
Different in that it’s not an AI model, it’s just a tool you can use to run AI models like Claude.
see my reply here
Just legal stuff. Making a huge deal of it is dumb
I disagree.
Legal stuff would be “use at your own risk”, or “answers may not be correct”.
This is really strong language.
Bad actors submitting garbage code aren’t going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.
“Guns don’t kill people. People kill people”
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?
I get their point. If you want to create good code by having AI create bad code and then spending twice the time to fix it, feel free to do that. But I’m in favor of a complete ban.
The (very obvious) point is that this cannot be enforced. So might as well deal with it upfront.
The keyboard thing is sort of a parable, it is as difficult to determine if code was generated in part by AI as it is to determine what keyboard was used to create it.
AI is a useful tool for coding as long as it’s being used properly. The problem isn’t the tool, the problem is the companies who scraped the entire internet, trained LLM models, and then put them behind paywalls with no options to download the weights so that they could be self-hosted. Brazen, unaccountable profiteering off of the goodwill of many open source projects without giving anything back.
If LLMs were community-trained on available, open-source code with weights freely available for anyone to host there wouldn’t be nearly as much animosity against the tech itself. The enemy isn’t the tool, but the ones who built the tool at the expense of everyone and are hogging all the benefits.
Eh, trust me, anti AI people don’t think this much about it
Also, there are a lot of open weight models out there that are pretty good
There are hundreds of such LLMs with published training sets and weights available on places like HuggingFace. Lots of people run their own LLMs locally, it’s not hard if you have enough vram and a bit of patience to wait longer for each reply.
You’re the one comparing AI and guns/killing people, and then saying their metaphorical comparison isn’t accurate? Lol
Wooting and Razer had a macro function that allowed Counter-Strike players to set up a bind that would always execute a perfect counter-strafe. Valve decided that was a bridge too far and banned “hardware-level” exploits.
So, Valve once banned a keyboard.
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?
It’s about the heritage of code not being visible from the surface. I don’t know about your brain.
Last I checked a keyboard only enters what I type
I’ve had (broken) keyboard “hallucinate” extra keystrokes before, because of stuck keys. Or ignore keypresses. But yeah, that means the keyboard is broken.
Last I checked a keyboard only enters what I type
I’m assuming the author is talking about mobile keyboards, which have autocomplete and autocorrect.
Out of curiosity how much code have you contributed to the Linux kernel?
I’d still be highly sceptical about pull requests with code created by LLMs. Personally, what I’ve noticed is that the author of such a PR often doesn’t even read the code, and I have to go through all the slop.
Yeah, I’m finding myself being the bad-code generator at work, as I’m scattered across so many things at the moment due to attrition. AI can do a lot of the boilerplate work, but it’s such a time and energy sink to fully review what it generates. I’ve found basic things I missed that others catch, which shows the sloppiness. I usually take pride in my code, but I have no attachment to what’s generated, and that’s exposing issues with trying to scale out using this.
Same. There’s reduction in workforce, pressure to move faster, and no good way to do that without sloppiness. I have never been this down on the industry before; it was never great, but now it’s terrible.
Some thought I had the other day: LLM is supposed to make us more productive, say by 20%. Have you won a 20% pay rise since you adopted it? I haven’t
Increases in productivity go to the owners, not the workers. Even imaginary increases in productivity.
Just fucking stop using it? Wtf? Tell your boss to pound sand! They’re going to blame you when it goes south anyway, so you might as well stay honest.
I suspect the answer will be that such large requests as you frequently see with LLM codegen will just be rejected.
Already I see changes broken up and suggested bit by bit, so I presume the same best practice applies.
Did we all forget about stackoverflow?
People blindly copy/pasted from there all the time.
A couple of years back I got a PR at work that used a block of code that read a CSV, used some stream method to convert it to binary, and then fed it to pandas to make a dataframe. I don’t remember the exact steps, but it was just crazy when pd.read_csv existed.
On a hunch I pasted the code into Google and found an exact match on Stack Overflow, for a very weird use case on a very early version of pandas.
I’m lucky: if people send obvious shit at work I can just cc their manager. But I feel for the volunteers at large FOSS projects, or even paid employees.
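For anyone curious, the roundabout pattern described above looks roughly like this sketch (reconstructed for illustration, not the actual PR code), next to the one-liner pandas already provides:

```python
import io

import pandas as pd

csv_text = "name,score\nalice,1\nbob,2\n"

# The roundabout route: re-encode the text to bytes, wrap it in a
# stream, and hand that to pandas...
buffer = io.BytesIO(csv_text.encode("utf-8"))
df_roundabout = pd.read_csv(buffer)

# ...when pd.read_csv accepts a path or any file-like object directly.
df_direct = pd.read_csv(io.StringIO(csv_text))

assert df_roundabout.equals(df_direct)
```

Both produce an identical dataframe; the extra encode/stream dance adds nothing but review burden.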
Yeah people have not understood their code for centuries now
Ah, the solution that recognizes there’s no way to eliminate AI from the supply chain after it’s already been introduced.
You make it sound as if there was another choice if just people had better principles. Pray tell us, what would you have done, now. Not in the past, now.
That wasn’t my intent. This is me saying, “of course that’s what they’re going to do because there’s nothing else they can do.”
I completely misunderstood you. I’m sorry.
You’re agreeing with the comment you replied to. Why the fuck are you trying to be so smug???
I agree. If AI becomes outlawed, it will simply be used without other people knowing about it.
This approach, at least, means that people will label AI-generated code as such.
Maybe. There’s still strong disapproval around it. I can imagine many will still hide it.
“yes to copilot no to AI slop” lol lmfao
There are so many reasons not to include any AI generated code.
I don’t understand the full picture here, but the person who is submitting AI slop will be held accountable. Never a company.
So if a company is pushing staff to use AI to complete projects faster, and their code ends up being AI slop when submitted, only the person working for the company will be held responsible.
I’m not sure what the repercussions are here but hopefully it’s not a large fine. Those fines could add up quick if the person is submitting code all the time and doesn’t know they are messing up.
Which fines, this is just an internal rule in an organization.
At most can be rightfully banned from contributing
If someone is contributing code that they don’t really understand, then they shouldn’t contribute.
Ah okay got it now. Thanks. I didn’t understand it all the way. My comment is irrelevant
This is a bad move. The GPL license cannot be enforced on AI generated code.
That’s not true. The new article being shoved down Lemmy’s throat is not correct. They cite court cases and come to bad conclusions.
Ok, well here are quotes from the US Copyright Office that establish that what I said is true:
https://sciactive.com/human-contribution-policy/#More-Information
The Copyright Office never said the GPL could not be enforced; that’s a conclusion you made. Hell, even in what you linked, the requirement is that AI had to be a “substantial” part. The Linux team said they would take submissions that were AI-assisted, not all-out generated. But to argue the point, let’s pretend an entire pull request was AI-generated. That is still only a small part of the Linux kernel, and the kernel as a whole is what is licensed. A small amount of uncopyrighted code can’t invalidate the whole project, which is what the license covers.
But regardless, the Copyright Office never said anything about enforcement of the GPL. It very clearly said “code with no meaningful human involvement”, which isn’t the case here. So nothing establishes that what you said is true. It’s all leaping to conclusions that can’t be leaped to.
The copyright office said material generated by AI is not copyrighted, even if that material is subsequently revised by the AI through additional prompts. That includes code. The GPL can only be used on copyrighted code. It is a copyleft license because it uses copyright law as a mechanism to enforce its terms. If you believe you can enforce a license on public domain material, that’s simply a gross misunderstanding of copyright law.
Yes, it will hopefully be a very small part of the kernel, but what happens thirty years from now if the kernel is all AI generated code? It may be a slippery slope, but it’s a valid slippery slope. The more the kernel is AI generated, the less of it the license can cover.
AI generated code cannot be copyrighted, can it? Then it can be relicensed as GPL.
In order to “license” a work, you need to own the copyright.
The status of generated code is ‘uncopyrightable’, which can be licensed.
Copyright law determines the copyright status and contract law enforces the terms of contracts. They are two separate issues.
If someone licenses you to use their AI generated code and you violate the license agreement, it doesn’t matter that they don’t have a claim under copyright law. They have a claim under contract law due to you violating the terms of the license (which is a contract).
The GPL is not a contract.
That is the FSF’s position, but case law has examples of it being allowed to be treated as a contract.
In SFC v. Vizio, the Software Freedom Conservancy sued Vizio as a third-party beneficiary of the GPL as a contract, and the court allowed the case to proceed on that theory.
Because in that case the copyright holder is the arbitrator of the terms under which their copyrighted material can be used and reproduced. If they did not own the copyright then any “license” would not be worth the paper it was written on and no judge would allow it to be treated as an implicit contract.
You’re right, I misread the context (I was trying to carry on multiple simultaneous conversations).
My apologies.
Distributing under the GPL is a software license agreement which is absolutely a contract:
A software license agreement is a legal contract that grants you permission to use software without transferring ownership. The software creator retains intellectual property rights while giving you specific usage rights under defined terms and conditions.
- https://ironcladapp.com/journal/contracts/software-license-agreement
Sure, you can license it whatever you want, but I can too, because it’s public domain. And neither of us can enforce those license terms on the other, because again, it’s public domain.