• LostWanderer@fedia.io · 18 points · 21 hours ago

    Ah, LLM glazing in the news… Given that LLMs aren’t that profitable, they have to scare people into believing this bubble won’t burst like a bloated cyst.

    • Ooops@feddit.org · 4 points · 20 hours ago

      To be fair, there are some who aren’t just telling the story to keep the inflated stocks going, but are true believers in the usefulness of AI and in how it will replace employees.

      Because they hate paying people a wage so much that they would convince themselves of any insanity if it means they can get rid of the plebs.

  • thallamabond@lemmy.world · 8 points · 20 hours ago

    Most tech observers say A.I. probably wasn’t the cause of those layoffs because, at the time, it wasn’t yet good enough to replace coders. Other factors, they figure, were more significant: Interest rates rose, so tech firms lost their easy growth money. Companies that overhired shed that excess capacity. Some also suspect that when Elon Musk bought Twitter and said he laid off 80 percent of his work force, tech executives at other firms took note and decided that maybe they didn’t need so many engineers either.

    neat

      • Saganaki@lemmy.zip · 7 points · 19 hours ago

        Claude has a place for simple tasks. It does a great job as an “advanced find in files” or a “smart boilerplate generator,” but with anything remotely more complex the issues start to show, really quickly. When writing new prototype code, it does very well. But when you have to handle all sorts of edge cases, it doesn’t do as well. It also doesn’t do a good job with debugging anything deeper than surface level. Opus does slightly better, but even then it falls into the frequent “I found the issue! Oh wait, that’s fine. I found the issue! Oh wait, that’s fine” loop over and over again.

        And before you ask: Yes, I’ve been using Copilot CLI for work pretty regularly for the past couple months.

        Aside: It doesn’t help that the true token costs are off by a factor of 100-1000. Yes, I know general reporting is saying the breakeven is 10x, but…well, you’ll have to trust me that’s not accurate.

          • CorrectAlias@piefed.blahaj.zone · 2 points · 13 hours ago

            YoU AReN’T pRoMpTInG PrOPerLy

            A classic. LLMs can help if you already know what you’re doing on your own, at the cost of burning tokens while basically guiding the model to the result you need. The second you use one for something you’re not an expert in, you enter pure slop territory. The fact is that LLMs are constantly hallucinating, just with guardrails that allow them to sometimes hallucinate the correct result.

            And yes, I’m in tech and have experience. The only way it’s going to “take” my job is if it crashes the entire economy because the bubble pops, or because an executive needs to pump some stock for braindead morons on Wall Street.

          • NOT_RICK@lemmy.world · 6 points · 18 hours ago

            I’m a programmer. There is a narrow lane of things I will trust it to assist me with, and none of that without review. I’m not worried about my job in the slightest, barring a significant leap in the tech that I frankly do not see happening any time soon.

            All these people building vibe-coded products are building houses of cards that are more likely than not security-nightmare black boxes.

          • owenfromcanada@lemmy.ca · 11 points · 20 hours ago

            I work on safety-critical automotive software. I left LLMs in the trash.

            So yes, I am using it properly.