I pretty much always recommend throttling. It’s a very low severity issue generally, but of course it depends on the product. There might be some products where it is a very big deal
I wager that, for example, most people didn’t vote in California not because they see their candidate as a lost cause, but because they know “their” candidate has carried the state for sure.
That’s a natural interpretation as well. I wonder if it’d be possible to at least guess at whether it was that or “my person won’t win, so what’s the point”. There are probably so many other factors. For example, the “did not vote” map looks surprisingly similar to the SVI map: https://www.atsdr.cdc.gov/place-health/php/svi/svi-interactive-map.html. I’m not entirely sure what to make of that; my knee-jerk thought is that among disenfranchised people in general you could see more “what’s the point, they’re both the same” or “neither side actually cares about my needs”, combined with maybe more voter suppression efforts in disenfranchised areas? Would making Election Day a federal holiday, or making it easier to vote by mail, help those areas specifically?
I’d be interested in an interactive version of this where you could assign a percentage of those non-votes to the candidate who lost the state, as a naive proxy for “what would have happened if the people who thought their vote didn’t matter because [D|R] would win anyway”. I know it wouldn’t be an actual measure, but it’d be fun to mess with anyway.
In particular I find it kinda interesting that CA and TX both fell into “didn’t vote” and are both historically considered “easy wins”.
This image is just generally interesting because it also turns the idea of swing states around a bit. If neither candidate motivated enough people in all of those states, could we consider them swing states?
I remember being a kid and the teachers just made it seem like the difference between desert and dessert was so deeply important.


The coordinated strike had an immediate impact. Millions of people in Dubai and Abu Dhabi woke up on Monday unable to pay for a taxi, order a food delivery or check their bank balance on their mobile apps.
I honestly can’t tell if this paragraph is supposed to be satirical.
You’re suggesting that we replace THEM with an agent.
I am not suggesting we replace anyone, least of all the open source community, so let’s not put words in my mouth.
I think the current code I see being generated is generally “good enough”. I’m not comparing it to perfect: I’m comparing it to people.
If this were true, then open source projects would have much less of an issue with pull requests from sloperators.
This doesn’t follow for me. A good tool in the hands of a crappy user doesn’t suddenly make good output. I specifically said that LLMs write good code in a specific setting. Clearly a random person generating thousands of lines at a time for a project they don’t understand isn’t that setting.
You seem to be very focused on crappy code generated by people that don’t know what they’re doing. The technology isn’t good enough for that, so yes, it won’t work in that setting; I agree.
I’d push back on your point here with a few things:
The primary one being: the code doesn’t need to be perfect or even above average – average is perfectly fine. The idea here is comparing the AI to a human, not to perfection. I see this comparison-to-perfection constantly in AI discussions, and I find it a bit disingenuous.
I do truly believe what I said above will be possible within my career (I’m in my mid 30s), but it’s not really what I’m worried about right now. I think the current code I see being generated is generally “good enough”. I’m not comparing it to perfect: I’m comparing it to people.
I read a comment once that still rings true - “Hallucinations” are a misnomer. Everything an LLM puts out is a hallucination; it’s just that a lot of the time, it happens to be accurate. Eliminating that last percentage of inaccurate hallucinations is going to be nearly impossible.
I don’t see any reason you have to remove all hallucinations to get a good tool for autonomous development: humans aren’t perfect either. We compensate for that with processes and checking each other’s work, but plenty still falls through the cracks.
LLMs also have no understanding of context outside the immediate. Satire is completely opaque to them. Sarcasm is lost on them, by and large. And they have no way to differentiate between good and bad output. Or good and bad input, for that matter. Joke pseudocode is just as valid in their training corpus as dire warnings about insecure code.
Have you seen output in which satirical code is actually included? I’m well aware of things like https://www.anthropic.com/research/small-samples-poison and the potential here. And do you not believe that either (a) these types of trivial issues would be caught by a person whose job was just to audit output or even (b) this type of issue could be caught by specially trained domain limited AIs designed to check output?
To your point then: what are your thoughts on this project? https://github.com/anthropics/claudes-c-compiler I’m not particularly interested in this use case right now but it seems more in line with what you’re interested in.
I think it shows a lot of limitations but also a lot of potential. I don’t personally think the AI needs to get the code perfect on the first go – it has to be compared to humans, and we definitely don’t get it right on the first go either.
I really really dislike the way it’s being sold as a solution for things it’s in no way a solution for.
Yes, of course. I think it’s important to look past the blowhards and think about what it’s actually doing: that is the perspective I’m trying to talk about this from.
I didn’t say “trust me bro”, and showing Claude submissions is sufficient for analyzing code in the context where I believe it is good: one file at a time and one task at a time. This is also the realm where a human is good. You are welcome to look at the project as a whole to determine the “project quality” as well: it’s open source. But I’m not here to argue: I believe this tech, barely in its infancy, is already quite good and going to get better, and I’m already considering what it will do to my life. If you don’t, that’s fine.
I’ll add here that I find it very frustrating to talk about these “AI agents” and their code output, because it’s something we’re all close to and have spent a lot of time learning. The concept of “a machine” getting “better than us” so quickly, with the background context of an industry that is chomping at the bit to replace humans, makes these discussions inherently difficult and really emotional. I feel genuine sadness when I think about it. If the world were different we’d probably all be stoked. I don’t want the AI to be better than me, and I currently don’t believe it is, but I think:
I don’t think my job is currently on the chopping block today: I don’t do development; I do security work. But I do think it will either be on the chopping block or fundamentally change sooner than I’m comfortable with.
My goal is to pay off my house and then accept a lower salary as a trade off for more fulfilling work.
Claude commits to GitHub with the same name no matter who uses it. You can see every single line of open source code it has written (for GitHub only of course): https://github.com/search?q=author%3Aclaude&type=commits&s=author-date&o=desc. Look around as you please, most of it is just fine.
People that I know to be good developers have also shared their experiences with it and say yes, it has written good code for them. I’ve personally used ChatGPT to generate very mundane tasks and the code it output was more than adequate.
It introduces security bugs and subtle bugs at probably the same rate as a human (I have no “citation” there, just what I’ve seen). It needs to be “driven” by a human, yes, but it’s not clear for how long it will need to be, and even if it always does, personally I don’t want my job to be to “drive an AI”.
Right, but I think we’re kidding ourselves if we don’t think it’s going to get better. I have no doubt it will be able to magically generate a new Linux kernel.
I have to track my time in 3 different systems
Which circle of hell is this?
I’ve been working in tech for about 10 years now as well, and I’m also just feeling tired. I’m a bit sad, because I like my job. I didn’t study computer science or anything in college; I just got work in security because I enjoyed it. It’s sad to pretty much know it won’t be the same. I don’t really want to offload a lot of the work to an AI in the future.
I’ve been getting more into learning to weld and work with wood. In the next few years I’ll probably consider starting a small custom furniture company.
I feel like this part of the conversation is drowned by the AI hype train and the AI hate train. The part where real people are seeing the real effects of a technology that is actually good, and is likely going to get better, and will have the potential for significant social damage to a large part of the middle class.
It really doesn’t suck at them. AI writes great code; I think we just want it to suck. It can’t magically generate a new Linux kernel, but the small tasks I’ve seen it do have all been mostly above average. (I have also seen some complete garbage, but yes, mostly above average.)
I think his ability to see the problems in the way he acted, and then actually act on it, even when a bunch of people encouraged it, is impressive.
Yea, that made me laugh; I just updated my resume from LaTeX to Typst a few months ago, actually.


I wouldn’t be surprised if this is actually what happened here… tech companies in general don’t delete data if they can avoid it. I’ve worked for companies that would just set deleted = 1 in the DB on delete calls. Google has more ability than anyone else to put that data to use.
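A minimal sketch of that soft-delete pattern, using an in-memory SQLite database (the `users` table and column names are hypothetical, purely for illustration):

```python
import sqlite3

# Sketch of the "soft delete" pattern: rows are never removed,
# a flag is flipped instead. Table/column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO users (email) VALUES ('a@example.com'), ('b@example.com')")

# A "delete" call just sets the flag...
conn.execute("UPDATE users SET deleted = 1 WHERE email = 'a@example.com'")

# ...so user-facing queries filter on it, but every row is still there.
visible = conn.execute("SELECT email FROM users WHERE deleted = 0").fetchall()
all_rows = conn.execute("SELECT email FROM users").fetchall()
print(visible)        # [('b@example.com',)]
print(len(all_rows))  # 2 -- the "deleted" row never left the database
```

The appeal for the company is obvious: the data survives for analytics, “undo”, or later reuse, even though the user sees it as gone.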
The effect they’re talking about comes from the peppercorns, not the peppers. You heat them slightly in a pan and then grind them in a mortar and pestle. I run them through a fine strainer after that, but I dunno if you have to.
Yea, it doesn’t matter too much in most instances, but there are times when it might, especially if the URL itself has some meaning embedded in it. For example, if part of the path is a SHA sum of some content, which is fairly common, it might be bad to allow someone to determine whether that resource exists.
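To illustrate the existence-oracle concern (every name and path here is hypothetical, not taken from any real service): with a content-addressed path, anyone who can guess the content can compute the URL, so any observable difference between “exists” and “doesn’t exist” leaks information.

```python
import hashlib

# Hypothetical content-addressed scheme: the last path segment is the
# SHA-256 of the file's contents.
def content_path(content: bytes) -> str:
    return "/files/" + hashlib.sha256(content).hexdigest()

# Stand-in for the server's set of real resources.
known_paths = {content_path(b"draft contract v3")}

def resource_exists(path: str) -> bool:
    # If the server answers 200 vs 404 (or fast vs slow) differently,
    # this check is exactly the oracle an attacker gets by probing URLs.
    return path in known_paths

# Hashing a guessed document and probing its path confirms whether it
# exists -- one reason throttling (and uniform responses) can matter.
print(resource_exists(content_path(b"draft contract v3")))  # True
print(resource_exists(content_path(b"some other guess")))   # False
```

This is also why unthrottled probing is worse than it first looks: rate limiting doesn’t close the oracle, but it makes guess-and-check enumeration much more expensive.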