• collapse_already@lemmy.ml · 17 points · 10 hours ago

    At my company, the Gen X programmers want to force the new hires to learn to code and debug before they’re allowed to use AI. The newbies, meanwhile, are clamoring for AI. Management gave them access, so we expect their development to be hindered. At least they can write more bugs per week now. Their SLOC metrics are probably better than ours, because we experienced folks don’t trust the AI at all. Management will probably lay off everyone who knows how to fix bugs soon.

    • GreenKnight23@lemmy.world · 1 point · 2 hours ago

      look on the bright side. once the bubble bursts and jobs open back up, we can basically ask for whatever salaries we want.

      I’m aiming for at least $200k. if not that then I’m getting some nonreturnable class A shares for my troubles.

    • SleeplessCityLights@programming.dev · 7 points · 9 hours ago

      You forgot the best part. None of those juniors are going to learn a thing; their skills are going to regress. A line will be drawn between those who learned to code before AI and those who learned after, and the skill gap between those sides will be incredible. Right now, codebases have people who understand large parts of them and can make design decisions based on that context. When people offload that work to AI, which can hold only a tiny amount of context compared to a human, nobody will understand the codebases anymore and it will be chaos.

      • pinball_wizard@lemmy.zip · 3 points · 6 hours ago

        The skill gap between those sides will be incredible.

        Yes. And we see this skill gap between folks who learned to code before web frameworks, vs after, as well.

        • collapse_already@lemmy.ml · 3 points · 5 hours ago

          Yes, and it is so frustrating. Last week I was tearing into a stack dump from a crash and one of the entry level kids was watching me. I immediately identified a bad pointer and walked the stack back to the function where it originated and determined that the pointer array index was out of bounds. I might as well have been practicing witchcraft. He had no sense of what a valid address looks like, nor did he understand why that bad address would lead to a bus fault that would throw an exception. The best thing about this particular kid is that he listens and learns. He still wants to code with AI, but he knows the geezers have skills he needs. Probably my favorite among our current crop.

          When I came out of school, I had experience in multiple assembly languages, operating system theory, compilers, and computer architecture. All areas where his knowledge is lacking. I am sure he knows lots of things I don’t, but I haven’t done a great job of identifying areas where those skills are applicable. I am pleased with his willingness and aptitude to learn. He’ll be fine, but I don’t have that confidence in a lot of them.

          (I should remember this post when I have to write performance feedback for him.)

  • Jankatarch@lemmy.world · 19 points · 13 hours ago

    Was AI an overhyped application of statistics and not the magical construct to all of us becoming billionaires overnight?

    Nah, people must be sabotaging it!

  • RedSnt ♾️🦋♂️👓🖥️@feddit.dk · 20 points · edited · 21 hours ago

    I’m impressed they wrote that whole article without going into the story about luddites. Or maybe I think about luddites too much…

    Ah, the report linked to is from “writer.com,” aka Writer Inc., a “generative artificial intelligence company based in San Francisco.” To nobody’s surprise, there are a lot of em-dashes in it.

    The biggest crime is perhaps that the whole PDF report is just pictures. You can’t highlight any text or search in it.

  • pjwestin@lemmy.world · 54 points · 1 day ago

    Boomer and Gen X middle-managers watching their AI rollouts fail because the technology’s efficiency and benefits have been vastly oversold.

    “Clearly, the Zoomers are sabotaging us.”

    • RamenJunkie@midwest.social · 6 points · 13 hours ago

      We have a new AI team at work to find ways we can use AI for our work.

      I cannot think of a single thing for what we do (manage data center hardware). We don’t configure it in any meaningful way where AI might be useful.

      Like, we have these once-a-month logs we process. Maybe that, but I already wrote and distributed an app (like ten years ago) that does it, because it’s simple data processing from X to Y. That already takes only 5 minutes now.

      • lepinkainen@lemmy.world · 1 point · 9 hours ago

        5 minutes to analyse a month of logs?

        Either that’s the most efficient log parser I’ve ever seen or you don’t log very much 😅

        • Arcka@midwest.social · 3 points · 8 hours ago

          If you’re just looking for something specific, even command line tools can be hundreds of times faster than general data processing applications.

      • funkless_eck@sh.itjust.works · 1 point · 12 hours ago

        hyper-personalize what stock can go into DR or off the bleeding edge, based on known EOL dates and maintenance cycles from OEMs?

  • givesomefucks@lemmy.world · 141 up, 1 down · 2 days ago

    People keep spreading this…

    Because they’re not smart enough to realize it’s pro-AI propaganda put out by AI companies…

    A new report published Tuesday from enterprise AI agent firm Writer and research firm Workplace Intelligence finds a significant share of employees are actively trying to sabotage their company’s AI rollout. The report—a survey of 2,400 knowledge workers across the U.S., the U.K., and Europe, including 1,200 C-suite executives—found 29% of employees admit to sabotaging their company’s AI strategy. That number jumps to 44% among Gen Z workers

    They need an excuse for why it’s not working, so they’re blaming jr workers, knowing ceos will come to the conclusion “just fire more people”.

    Even the way they’re phrasing this, makes it sound like the only reason an employee doesn’t like AI, is they’re a “hater” scared of losing their job.

    Do people legitimately not understand any of this? It seems incredibly obvious, but this is like the 20th article I’ve seen, and I don’t know why people keep spreading this shit.

    • NoiseColor @lemmy.world · 11 up, 5 down · 1 day ago

      It’s not about looking for a scapegoat yet. It’s about CEOs actually not understanding why it’s not working.

      I have a situation like that at my work. All of top management knows AI only at the level where it seems like everything is possible. It’s a beautiful level; I remember being there, so nice. For a while I tried to explain where the limits are, but I was dismissed as a naysayer every time. So I adapted and decided to officially get back on that train, but to route most of my work to where it makes sense.

      • mrgoosmoos@lemmy.ca · 1 point · 7 hours ago

        currently going through this at my small company. the owners seem to think it’s great. one of them has been playing around with it, creating various tools, for the past couple of years. to be fair, the last thing he’s been working on has actually been rather impressive. the other guy only just started using it and I think he’s in the honeymoon phase. still, it’s a bit worrying.

        I’ve asked when I can get access to the same tools, and it hasn’t been rolled out to the teams yet. but from what I’ve seen of the actual use cases for us (consolidating standards documents, pulling information out of standards documents, creating spec sheets and requirements documents, etc.), it is not really worth it, since everything has to be validated anyway.

        from my perspective of not being able to use the same tools myself, it still seems like just a search engine to me. a better ctrl+f. which isn’t to say it’s a bad tool, though definitely an inefficient one.

      • givesomefucks@lemmy.world · 19 up, 3 down · 1 day ago

        It’s about CEOs actually not understanding why it’s not working.

        Half the respondents are from the c-suite…

        And the question asked wasn’t “are you doing this” it’s “do you believe people are doing this”.

        I literally quoted it because I knew people still wouldn’t read the source, but here we are.

    • mineralfellow@lemmy.world · 2 up, 2 down · 1 day ago

      I would be curious how they phrased the questionnaire and how it is being interpreted. Surely they didn’t ask, “Are you trying to sabotage AI?” It must have been something more benign whose meaning was modified by the marketers.

      • RamenJunkie@midwest.social · 2 up, 1 down · 13 hours ago

        Probably something like “Have you used AI tools to help develop efficiency at your job?”

        And people say no, because they have no use for it, so it gets interpreted as “sabotage.”

      • givesomefucks@lemmy.world · 3 up, 1 down · 20 hours ago

        I would be curious

        Really?

        Most people would have then followed the breadcrumbs and checked.

        Why did you care enough to type that out, when clicking two links to find the answer was so easy and you could have found out immediately?

        • mineralfellow@lemmy.world · 1 up, 1 down · 12 hours ago

          When I click the link, I see one sentence, a video that doesn’t load, an ad, and a demand to subscribe.

    • AlecSadler@lemmy.dbzer0.com · 5 up, 1 down · 1 day ago

      Depending on the application and the way it is used, AI absolutely works as promised.

      But those instances are few and far between, and the general hype and expectations far exceed what it actually yields.

      As someone who uses AI daily to produce faster and better, I absolutely hope the bubble pops and shit collapses.

  • Zorque@lemmy.world · 61 points · 1 day ago

    “The AI output is always so shitty… the workers must be the problem, they’re clearly sabotaging our obviously perfect way to make perfect profits!”

    • BeigeAgenda@lemmy.ca · 16 up, 1 down · 1 day ago

      The problem is that LLMs sometimes get the right answer, and then you’re like, Wow, this is the best! The next minute you’re thinking, It must be me not giving enough context? Let me try a different model, which then also fails.

      • OpenStars@piefed.social · 9 points · 1 day ago

        Intermittent reinforcement is literally the most powerful form of conditioning there is.

        People who are not aware of their biases are mastered by them.

        Stay curious folks!

  • Feyd@programming.dev · 16 points · 1 day ago

    Sabotaging AI strategy = getting things done the way that works instead of following the top-down directive that doesn’t

    A tale as old as time