• brucethemoose@lemmy.world · 10 days ago

    This is actually a good idea.

    Latent encoding/decoding is basically extreme, “good enough” image compression. It’s a pretty old ML technique, and the models to do it can be small (e.g., fast-ish). A cross-platform implementation is doable.
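    For a sense of scale, here’s a back-of-envelope sketch of why storing latents instead of a full bitmap saves VRAM. All the numbers (latent resolution, channel count, fp16 storage) are illustrative assumptions, not figures from any shipping implementation:

```python
# Full-resolution RGBA8 bitmap vs. a low-resolution latent grid that a
# small decoder network would expand back to full resolution.
width = height = 4096
bitmap_bytes = width * height * 4  # RGBA8: 4 bytes per texel

latent_downscale = 8    # assumption: latent grid at 1/8 resolution per axis
latent_channels = 16    # assumption: 16 channels per latent texel
latent_bytes = (
    (width // latent_downscale)
    * (height // latent_downscale)
    * latent_channels
    * 2                 # assumption: fp16 latents, 2 bytes per channel
)

ratio = bitmap_bytes / latent_bytes
print(f"bitmap: {bitmap_bytes // 2**20} MiB, "
      f"latents: {latent_bytes // 2**20} MiB, {ratio:.0f}x smaller")
```

    With these made-up but representative numbers, a 64 MiB bitmap shrinks to an 8 MiB latent grid; the trade is that a small network has to run at sample time to get texels back.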

    If I were a game dev, I wouldn’t overuse it, but it seems like a great way to store huge, somewhat less frequently-used textures in VRAM.


    …But one thing I’m wondering is what they decode to. If it’s a huge bitmap (instead of a compressed texture format), that seems suboptimal. And I don’t think it’s decoded per pixel like BCx compression, unless I’m misremembering how it works. That granular access is kind of the point of texture compression.
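    To make the granular-access point concrete: a BCx texel can be decoded from its own fixed-size block with no dependence on the rest of the texture, which is what lets the GPU sample compressed data directly. A rough Python sketch of single-texel BC1 (DXT1) decode, treating the 64-bit block as one little-endian integer and ignoring sRGB and alpha details:

```python
def rgb565_to_rgb888(c):
    """Expand a packed 16-bit RGB565 color to 8 bits per channel."""
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_bc1_texel(block, x, y):
    """Decode one texel (x, y in 0..3) of a 64-bit BC1 block.

    Only this block is needed -- no neighboring data -- which is the
    'granular access' property of block texture compression.
    """
    c0 = block & 0xFFFF            # endpoint color 0 (RGB565)
    c1 = (block >> 16) & 0xFFFF    # endpoint color 1 (RGB565)
    indices = block >> 32          # 2-bit palette index per texel
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:
        # 4-color mode: two interpolated colors between the endpoints
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:
        # 3-color mode: midpoint plus a black/transparent entry
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    idx = (indices >> (2 * (4 * y + x))) & 0b11
    return palette[idx]

# White/black endpoints; texels (0,0), (1,0), (2,0) use indices 0, 1, 2.
block = (0x24 << 32) | 0xFFFF
print(decode_bc1_texel(block, 0, 0))  # endpoint 0: white
print(decode_bc1_texel(block, 1, 0))  # endpoint 1: black
print(decode_bc1_texel(block, 2, 0))  # 2/3 white interpolant
```

    A neural decoder, by contrast, typically reconstructs a whole tile (or the whole image) per inference pass, so random per-texel sampling straight out of the compressed representation is much less natural.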

    • ggtdbz@lemmy.dbzer0.com · 10 days ago

      That’s one thing that’s been bothering me a lot about this AI worship era that I don’t see mentioned enough anywhere. Machine learning is a fucking incredible tool that can surely be used to do a lot of things in a novel or better way. Instead, all of the investment and eyeballs are on overplayed party tricks.

      • brucethemoose@lemmy.world · 10 days ago

        That’s what OpenAI wants.

        They want “Our AI vs no AI.” They want to stoke the simplistic ML hate because the thing they fear most is it being viewed for what it is: dirt cheap, dumb, albeit useful tools. Like a set of specialized hammers.

        It’s what I keep trying to tell the “Fuck AI” crowd, but no one wants to hear it, and they’re playing right into what these stupid Tech Bros want.

  • Sneezycat@sopuli.xyz · 11 days ago

    Great, more vendor-locked solutions to problems created by them.

    It really is impressive how Nvidia manages to push everyone into buying from them by enshittifying the whole market.

    • Darkaga@lemmy.world · 10 days ago

      While the work is being pushed by Nvidia, it’s based on an API feature being added to DirectX and Vulkan called “Cooperative Vectors.” This is just Nvidia’s implementation.

      It’ll just be up to each hardware vendor to support it.

  • Vik@lemmy.world · 11 days ago

    Seems pretty perf-intensive for present hardware. Wonder how it will influence later designs; presumably more die area would go toward accelerating this.

    • kunaltyagi@programming.dev · 10 days ago

      Smarter designs are possible than just throwing die area at this. Intel and AMD both have patents on similar HW techniques.