At my company, the Gen X programmers want to force the new hires to learn to code and debug before they’re allowed to use AI. The newbies, meanwhile, are clamoring for AI. Management gave them access, so we expect their development to be hindered. At least they can write more bugs per week now. Their SLOC metrics are probably better than ours, because we experienced folks don’t trust the AI at all. Management will probably lay off everyone who knows how to fix bugs soon.
look on the bright side. once the bubble bursts and jobs open back up, we can basically ask for whatever salaries we want.
I’m aiming for at least $200k. if not that then I’m getting some nonreturnable class A shares for my troubles.
You forgot the best part. None of those juniors are going to learn a thing. Their skills are going to regress. We will draw a line between those who learned how to code before AI and those who learned after. The skill gap between those sides will be incredible. The code bases right now have people who understand large parts of them, and they can make design decisions based on their context. When people offload that work to AI, which can only handle a tiny bit of context compared to a human, nobody will understand code bases anymore and it will be chaos.
The skill gap between those sides will be incredible.
Yes. And we see this skill gap between folks who learned to code before web frameworks, vs after, as well.
Yes, and it is so frustrating. Last week I was tearing into a stack dump from a crash and one of the entry level kids was watching me. I immediately identified a bad pointer and walked the stack back to the function where it originated and determined that the pointer array index was out of bounds. I might as well have been practicing witchcraft. He had no sense of what a valid address looks like, nor did he understand why that bad address would lead to a bus fault that would throw an exception. The best thing about this particular kid is that he listens and learns. He still wants to code with AI, but he knows the geezers have skills he needs. Probably my favorite among our current crop.
When I came out of school, I had experience in multiple assembly languages, operating system theory, compilers, and computer architecture. All areas where his knowledge is lacking. I am sure he knows lots of things I don’t, but I haven’t done a great job of identifying areas where those skills are applicable. I am pleased with his willingness and aptitude to learn. He’ll be fine, but I don’t have that confidence in a lot of them.
(I should remember this post when I have to write performance feedback for him.)
Was AI an overhyped application of statistics and not the magical construct to all of us becoming billionaires overnight?
Nah, people must be sabotaging it!
¿Por qué no los dos?
I mean, tbh, I’m doing everything I can to bring attention to its shortcomings at my workplace. I think there are a lot of us.

I’m impressed they wrote that whole article without going into the story of the Luddites. Or maybe I think about Luddites too much…
Ah, the report as linked is from “writer.com”, aka Writer Inc., a “generative artificial intelligence company based in San Francisco”. To nobody’s surprise, there are a lot of em-dashes in it.
The biggest crime is perhaps that the whole PDF report is just pictures. You can’t highlight any text or search in it.
Boomer and Gen X middle-managers watching their AI rollouts fail because the technology’s efficiency and benefits have been vastly oversold.
“Clearly, the Zoomers are sabotaging us.”
We have a new AI team at work to find ways we can use AI for our work.
I cannot think of a single thing for what we do (manage data center hardware). We don’t configure it in any meaningful way where AI might maybe be useful.
Like, we have these once-a-month logs we process; maybe that, but I already wrote and distributed an app (like ten years ago) that does it, because it’s simple data processing, X to Y. That already takes only 5 minutes now.
5 minutes to analyse a month of logs?
Either that’s the most efficient log parser I’ve ever seen or you don’t log very much 😅
If you’re just looking for something specific, even command line tools can be hundreds of times faster than general data processing applications.
It’s very specific data that it’s looking for: transmits and receives for EAS, and where they came from.
Hyper-personalize what stock can go into DR or off the bleeding edge, based on known EOL dates and maintenance cycles from OEMs?
EOL dates? LOL, we have servers that are like 15 years old now.
pft. I pushed middle management to millennials. Even young boomers are retired now.
People keep spreading this…
Because they’re not smart enough to realize it’s pro-AI propaganda put out by AI companies…
A new report published Tuesday from enterprise AI agent firm Writer and research firm Workplace Intelligence finds a significant share of employees are actively trying to sabotage their company’s AI rollout. The report—a survey of 2,400 knowledge workers across the U.S., the U.K., and Europe, including 1,200 C-suite executives—found 29% of employees admit to sabotaging their company’s AI strategy. That number jumps to 44% among Gen Z workers.
They need an excuse for why it’s not working, so they’re blaming jr workers, knowing ceos will come to the conclusion “just fire more people”.
Even the way they’re phrasing this, makes it sound like the only reason an employee doesn’t like AI, is they’re a “hater” scared of losing their job.
Do people legitimately not understand any of this? It seems incredibly obvious, but this is like the 20th article I’ve seen, and I don’t know why people keep spreading this shit.
Yes, this tendency is really dangerous in my opinion.
It’s not about looking for a scapegoat yet. It’s about CEOs actually not understanding why it’s not working.
I have a situation like that at my work. All the top management know AI only at the level where it seems everything is possible. It’s a beautiful level; I remember being at that level, so nice. For a while I tried to explain where the limits are, but I was dismissed as a naysayer every time. So I adapted and decided to officially get back on that train, but route most of my work to where it makes sense.
currently going through this at my small company. the owners seem to think it’s great - one of them has been playing around with it creating various tools for the past couple years. to be fair, the last thing he’s been working on has actually been rather impressive. the other guy only just started using it and I think he’s in the honeymoon phase. still, it’s a bit worrying.
I’ve asked when I can get access to the same tools, and it hasn’t been rolled out to the teams yet. but from what I’ve seen, the actual use cases for us (consolidating standards documents, pulling out information from standards documents, creating spec sheets and requirements documents etc) it is not really worth it, since everything has to be validated anyways
from my perspective of not being able to use the same tools myself, it still seems like just a search engine to me. a better ctrl+f. which isn’t to say it’s a bad tool, though definitely an inefficient one.
It’s about CEOs actually not understanding why it’s not working.
Half the respondents are from the c-suite…
And the question asked wasn’t “are you doing this” it’s “do you believe people are doing this”.
I literally quoted it because I knew people still wouldn’t read the source, but here we are.
I didn’t read the article and am grateful for the context :D <3
I would be curious how they phrased the questionnaire and how it is being interpreted. Surely they didn’t have a question, “Are you trying to sabotage AI?” Must have been something more benign that was modified in meaning by the marketers.
Probably something like “Have you used AI tools to help develop efficiency at your job?”
And people say no, because they have no use for it, so it gets interpreted as “sabotage”.
I would be curious
Really?
Most people would then have followed the breadcrumbs and checked?
Why did you care enough to type that out, when clicking two links to find the answer was so easy and you could have found out immediately?
When I click the link, I see one sentence, a video that doesn’t load, an ad, and a demand to subscribe.
Are they really sabotaging it, or is it just not working as promised?
Shhh, you’re gonna ruin the propaganda!
Why not both?
Depending on the application and the way it is used, AI absolutely works as promised.
But those instances are few and far between and the general hype and expectations are far and above yield.
As someone who uses AI daily to produce faster and better, I absolutely hope the bubble pops and shit collapses.
“The AI output is always so shitty… the workers must be the problem, they’re clearly sabotaging our obviously perfect way to make perfect profits!”
The problem is that LLMs sometimes get the right answer, and then you’re like, “Wow, this is the best!” And the next minute you’re thinking, “It must be me not giving enough context? Let me try a different model,” which then also fails.
Intermittent reinforcement is literally the most powerful form of conditioning there is.
People who are not aware of their biases are mastered by them.
Stay curious folks!
Sabotaging the AI strategy = getting things done the way that works instead of the top-down directive that doesn’t
A tale as old as time

AI is undependable at best and kills people at worst. It’s a horrible product.
It’s so damn good at writing code though
Okay, Microsoft exec
Absolutely not anything by Microsoft that’s for sure. Local models are great for me
Which model have you been using recently? I haven’t been able to find any local LLMs that are good at coding so far; gemma4 was alright, but about half the time it would just literally go crazy and start repeating a sentence.
Good for Gen Z.
Nobody’s sabotaging anything that manages to shit itself half a step into any task.
Bullshit















