• RedWizard [he/him, comrade/them]
    ·
    2 months ago

    This isn’t about quality. This isn’t about learning. This is about control.

    What is it with AI slop and this exact line of phrasing? You see it everywhere. Maybe it's just something humans have fallen into as a kind of rhetorical crutch when writing, and the AI has picked up on it. "This isn't about X, this isn't about Y, it's Z with a new coat of paint!", "It isn't about Left, it isn't about Right, it's about moving Forward!", "It isn't about Pineapple, it isn't about Ham, it's about Pepperoni Hegemony!"

    • BobDole [none/use name]
      ·
      2 months ago

      I always see the comment “when will people realize it’s not left vs right it’s up vs down” under political tiktok videos and I want to strangle whoever wrote the comment. Now I’m sure it’s probably some bot shit

      • prole [any, any]
        ·
        2 months ago

        Fucking same. Left v right is up v down, they just have no idea what "left" means.

      • RedWizard [he/him, comrade/them]
        ·
        2 months ago

        Yeah that one gets me too. It's so vapid. Cool, it's "up vs down". What does that mean to you, TikTok commenter? Who falls into the "up" category in your rubric, TikToker?

        They don't know. Because it's some thought-terminating cliché.

    • BodyBySisyphus [he/him]
      ·
      2 months ago

      I'm assuming it's a product of the reinforcement learning; it must appeal to the people who rate responses and as a result got artificially dialed up from its ordinary background level.

    • hello_hello [undecided, comrade/them]
      ·
      2 months ago

      LLMs constantly do the rule of threes because they probably picked it up from motivational speakers or some shit.

      The trainers probably think "muh nuance" was good and kept feeding it cookies.

    • Damarcusart [he/him, comrade/them]
      ·
      2 months ago

      I think it's because it can't actually structure an argument properly, since it has no logic. It can't tell what is and isn't true, so it tries to simulate how ideas get conveyed, but without any ability to actually assess an idea and provide evidence for or against it.

      • invalidusernamelol [he/him]
        ·
        2 months ago

        You can just keep telling it that it's wrong, even when it's right, and it'll go "my mistake! You're totally right!"

    • Owl [he/him]
      ·
      2 months ago

      Turns out that learning the underlying meaning of sentences from a long stream of unfiltered internet is too hard to do by just wiggling a bunch of multiplication matrices. But you can get long chunks of accurate text predictions if you focus on every cliche that avoids communicating any meaning.