MangoCats, mangocats@feddit.it

Instance: feddit.it
Joined: a year ago
Posts: 2
Comments: 715

RSS feed

Posts and Comments by MangoCats, mangocats@feddit.it

The bottom line for me is: it finds issues. More issues than typical human code reviews find. Like human code reviews, some of the issues it finds are trivial or unimportant - debatable whether "fixing" them actually improves the product overall. Also like human code reviews, it sometimes flags things that look like issues but really aren't when you dig into the total picture. But some of the issues it finds are real, and some are subtle - actual memory leaks, unsanitized inputs, etc. - and if you're going to ignore those, you're just making worse software than is possible with the current tools.

Also, unlike most human code reviews, when it finds an issue it can and will do a thorough writeup explaining why it believes it is an issue, code snippets in the writeup, links into the source, proposed fixes, etc. All that detail is way too much effort to be a productive use of a human reviewer’s time, but it genuinely helps in the evaluation of the issue and the proposed fix(es).

Just like human code reviews, if you just accept and implement everything it says without thinking, you're an idiot.


You know what AI agents can help accomplish faster, with fewer human resources, than previous tools?

  • cleanup: Review this code for technical debt, report. Plan and implement fixes to address (selected portions of reported) tech debt.

  • refactor: Review this code for DRY and SSOT opportunities. Plan and implement…

  • Architectural Design - yeah, I'm not on a good footing with how to leverage the current tools for good architectural design. They are good, however, at tech stack selection - comparisons of various options, including architectural options. They're not always great at following architectural designs when the system gets too complex to keep the whole architecture in context while designing. Much like human-designed systems, they work better if you can modularize and keep each module a manageable size, building tree-style to form the larger system.

  • poor understanding of the code by developers. Yeah, any code not written by me is hard to understand, and any code written by me is hard for others to understand. “Me” being the vast majority of developers I have ever worked with. At least agents will comment their code and write somewhat comprehensive documentation when you ask them to.

  • management/planning failures more than anything - the strongest tool I have found for AI development is to have the agents make plans. Review those plans, or not, but have them make a plan, then have them implement the plan, then have them review the implementation against the plan and point out discrepancies / shortcomings. The worst behavior AI agents had (a few months ago; they're getting better) was to do some fraction of what you tell them to, then say, effectively, "ALL DONE BOSS! What's next?" What's next is to go back to the written plan and make sure it's complete. I think, again, they lose sight of the plan as their context window overflows, so you have to keep reminding them to re-read it. Management.

  • the team being so large no one can stay on top of things: this is very familiar turf when dealing with the limited context windows of AI agents.

  • too much time passing since anyone has looked at or changed parts: this is something AI agents don't suffer from - they have "the eternal sunshine of the spotless mind"; you are introducing them to the project fresh with every new context window. Hopefully you are simultaneously developing a tree-form documentation set with which they can easily navigate to the parts of the project they need to focus on and get "up to speed" for the new tasks at hand (which should include: maintenance of the documentation).

  • When you use AI tools the code base grows very quickly - only if you let it.

  • too quick to really comprehend, thus: the documentation - which AI agents aren’t too bad at writing.

  • you get shitty architecture to go along with it - only when you allow it.

I’ve seen a lot of “10x PRODUCTIVITY!!!” claims, and when you move at those speeds you’re going to encounter exactly the problems you describe. If you move more deliberately - as if you are managing a revolving-door team of consultants - and have the discipline to manage the architecture design and documentation, the implementation documentation, the unit and integration tests, etc., some may argue that it’s easier to do it by hand, and in some cases it may be. But I feel like we’re at a point where you might expect more like a 3x productivity boost using AI agents vs not using them, with a bonus: you get the artifacts of disciplined development. Your human team will bitch and moan that “doing all that” (unit tests, docs) is slowing them down by 50-80%, so humans tend to skimp in those areas, whereas AI doesn’t complain at all when you task it with the 14th round of unit test coverage evaluation, refinement and expansion.

  • You’re just speedrunning enterprise software or spending all your time reviewing slop code.

When’s the last time you used an AI agent to write a significant chunk of code? https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

  • It’s like a drug, the first time it does something fast and well you feel it’s so great, and that’s a problem… if you’re going to party with cocaine you’re going to need some serious discipline to hold down a day job at the same time.

  • and can only ever suck. The world changes. The world of AI code development has changed significantly over the past year. A year ago I called it “cute, interesting potential, practically useless.” 6 months ago the improvements were so dramatic I decided I needed to get a handle on it - yeah, it was limited in the complexity it could handle and made a lot of slop, but it was so far ahead of where it was 6 months prior… Today, it’s not perfect, but it’s a lot better than it was 6 months ago, and while you can make a lot of slop with it, you can also keep a leash on it and clean up the slop while still making super-human forward progress.

  • Worst case you spend all your time wrestling with it and never get a finished product. - just like working with human teams.


the energy costs for the restaurant to operate, maintain their structure, maintain the roads for you and the employees to get to and from, costs of employee transit

that would be split though? a bit like a bus or train, average costs to produce the meals would be lower as they’re made in bulk

I find money to be a really good proxy for carbon emissions. The more you pay for something, the more people get money that they eventually spend on energy. So, if you go to a restaurant and pay $100 for the meal, don’t focus on where the potatoes were grown or whatever, focus on the $100 - where does it go? Most of it ends up paying for the expenditure of energy eventually.

https://www.bankaust.com.au/about-us/conservation-reserve

Yes, but like the restaurant where you are just a pro-rated slice of “the bad things”, schemes like this make you a pro-rated slice of “the good things.” It’s not a bad thing, but it’s not as big of a good thing as it sounds by the time 10,000 people share in its “goodness.”

Re: https://corenafund.org.au/

This: https://mangocats.com/ao/BeachVendorBlockchain.html is a half-baked idea I have been kicking around for use of blockchain tech as a backbone for small scale / local credit. Don’t go expecting the simulation servers to be running or anything like that, I just spent a weekend updating the old ideas with spare AI token credits recently and while it made tremendous progress, as the papers themselves say: it really needs hands-on time and attention to develop it properly.



I am technically climate positive (I reduce more emissions than I make)

by some measures. Look at the food you eat, cost of transport, energy costs to produce, maintain and eventually recycle your energy and dwelling infrastructure. Do you eat at restaurants? Not only what you consume there, but the energy costs for the restaurant to operate, maintain their structure, maintain the roads for you and the employees to get to and from, costs of employee transit…

You are making an effort, which is admirable and sadly rare. You are highly unlikely to be cradle-to-grave climate positive. We all emit CO2, which is balanced by natural processes from phytoplankton to rainforests to the African violet on your windowsill. Unless you maintain and protect a large swath of nature (we used to own 20 acres of undeveloped forest… that didn’t completely balance our footprint, but it probably did more than ecosia…) you are party to more CO2 emission than carbon recapture.


I know a family that uses it to write sappy greeting card type mini-novels to each other.

Others use it to make graphic promotional flyers (give me a flyer describing this event - time, place, activities - and illustrate it with pictures of geckos and calla lilies…)

Some (bad) colleagues at work use it to summarize their e-mail inbox, and draft replies when warranted.

I know a woman who has been teaching “Creative Writing” at an online university for 10 years. Her past two crops of students are 99% using it to write their papers.

I’m sure there are day-traders out there using it all kinds of ways to try to gain advantage in the markets.

Customer support is a big one (reading the manuals to the customers) and, sadly, customer engagement - constant contact type messages.

A friend of ours recently took a 3 week vacation in Ireland, gave AI some “hard constraints” that they had to meet and let it plan out their routing, lodging, car rental, train fares, etc.


I glanced across one once. I doubt any of the demographics are entirely reliable, but even with the uncertainty it appears that programmers are but a small minority, even among Claude users.


The seven line script isn’t burning down the rainforest. Programmers are a 1% slice of AI token usage. A seven line script takes less datacenter power to generate than your monitor does to show you this message thread while you read it. What’s burning down the rainforest is billions of “ordinary” people using AI to make Rule 34 animations, graphic layout flyers for their next coffee date, research papers on obscure topics that will never be read by anyone, schemes to trade bitcoin, etc. etc. etc.


maybe it’s just 3d printing 2.0

3D printing was awesome. Once or twice a year, it could make something for me that was genuinely useful. It also occupied a big chunk of space in the house and mostly created useless plastic junk, adding more clutter - we started 3D printing in 2018 and gave the printer away to someone with a bigger house in 2020. Once since then I have asked him to print something for me. It’s that useful.

AI agents are making software. I’ve got terabyte flash drives smaller than a key fob - the clutter won’t be a problem in our house anytime soon. So far, Claude has helped me clean up / fix several hobby software projects in amazingly short time to an amazingly high level of polish.


I used to code with pen and paper, in 1982. I’m a bit faster with the modern tools, even if they crash occasionally. Except maybe Eclipse; methinks it doth crash too much.


Sometimes work requires images. Claude is pretty awesome at making .svg files illustrating - pretty much whatever you can describe.



I don’t need Claude to create, the ability has always been in me - but it comes out much more slowly without tools that assist me, whether that’s books with example code, websites that document APIs, community sites that discuss problems and solutions, web searches that bring me reference material related to what I’m doing, or AI agents which propose formal requirements and code that implements those requirements complete with tests.

It’s all my “creativity” - but a lot of professional programming more resembles painting a house than a still-life canvas. Painting a house using tiny art brushes is possible, but it takes a lot longer than using a spray-gun.


Both. So, if you - or other meatbags - write the code, AI agents can review it faster / more thoroughly (therefore: better) than human reviewers, finding more problems and proposing fixes.

If you accept every fix your reviewer proposes, human or AI, you’re an idiot. But if you ignore every bug a human or AI reviewer raises as a possibility, you’re an even bigger idiot. If your code has no bugs, you’re a liar.


I’m a C++ programmer most of the time, but I also write scripts. Claude and even Gemini write better bash scripts than I do - better error handling, better commenting, better input sanitizing, better use of parameters - because all that bash syntax is an annoying pile of illogical junk that I have to look up every time I go to use it, and what are AI agents really fast at doing? Looking stuff up.
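To make "better error handling / input sanitizing" concrete, here's a minimal sketch of the kind of defensive bash the agents churn out without complaint - the `count_lines` function and its behavior are my own invention for illustration, not anything an agent actually produced:

```shell
# count_lines FILE - print FILE's line count, validating the argument first.
count_lines() {
  local infile=${1:-}   # default-empty so an unset $1 doesn't abort under set -u
  if [ -z "$infile" ]; then
    echo "usage: count_lines <file>" >&2
    return 1
  fi
  if [ ! -f "$infile" ] || [ ! -r "$infile" ]; then
    echo "error: '$infile' is not a readable file" >&2
    return 2
  fi
  # quote every expansion so paths with spaces survive; trim wc's padding
  wc -l < "$infile" | tr -d '[:blank:]'
}
```

The habits worth copying are the ones I always forget: quote every expansion, check the input before using it, and fail loudly with a distinct exit status instead of letting `wc` blow up on a bad path.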


The key to successful use of AI is: use it to write better code, not more code. This means iterations - just like when humans write code.


If you’re reading a study performed 6, or even 3 months ago, you’re getting significantly stale data - it really is moving that fast.


Speed and quality are basically uncorrelated, except in the big picture where slower -> better -> fastest to a real solution for the whole system.

The reason slower leads to better is due to iterations of examination and consideration of more possible use cases. It’s not about wall clock time, it’s about understanding the problem. AI agents let you shoot yourself in the foot 10x faster, but they also help you work through the edge cases faster too - not 10x faster, because the agent can’t know the edge cases as well as a domain expert can, but still faster.


Remember, it was trained on Stack Overflow… a self-selected collection of “beginner” questions with a lot of “this is good enough to get you unstuck” answers. If you want better, you have to specify and test to ensure you get better.


The outages have been so brief, and the speed margin is so great now, that waiting for the outages is a trivial delay as compared to “doing it the old fashioned way.”

