• DarkCloud · 35 points · 4 months ago

    We tried helping billionaires and it didn’t work.

    • lmuel@sopuli.xyz · 3 up, 1 down · 4 months ago

      Tbf, I’m not sure how much it helps them if you’re using the LLM without an account.

      • Whelks_chance · 12 points · 4 months ago

        Market share. They can show the usage figures to investors and ask for more cash.

            • GeneralDingus@lemmy.cafe · 1 point · 4 months ago

              I’m not sure what you mean by ideal. Like, run any model you ever wanted? Probably the latest AI-focused NVIDIA chips.

              But you can get away with a lot less for smaller models. I have a mid-range AMD card from four years ago (I forget the exact model off the top of my head) and can run 8B-sized text models without issue.
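              For example, here’s a minimal sketch of that kind of setup using llama-cpp-python (the Python bindings for llama.cpp); the model filename is just a placeholder for whatever GGUF quant you download:

              ```python
              # Minimal sketch: run an ~8B GGUF model fully on the GPU with
              # llama-cpp-python (pip install llama-cpp-python).
              # On an AMD card you'd want a ROCm or Vulkan build of llama.cpp.
              from llama_cpp import Llama

              llm = Llama(
                  model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder filename
                  n_gpu_layers=-1,  # -1 offloads every layer to the GPU
                  n_ctx=4096,       # context window size
              )

              out = llm("Explain quantization in one sentence.", max_tokens=128)
              print(out["choices"][0]["text"])
              ```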

            • IngeniousRocks (They/She) @lemmy.dbzer0.com · 2 up, 1 down · 4 months ago

              A relatively recent gaming-type setup with local-ai or llama.cpp is what I’d recommend.

              I do most of my AI stuff with an RTX 3070, but I also have a Ryzen 7 3800X with 64 GB of RAM for heavy models where I don’t care so much how long it takes but need the high parameter count for whatever reason, for example MoE or agentic behavior.
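              A rough sketch of that split, again with llama-cpp-python (hypothetical model file; the right layer count depends on your VRAM): whatever doesn’t fit on the GPU stays in system RAM.

              ```python
              # Sketch of partial GPU offload: a model too big for 8 GB of VRAM
              # (e.g. an RTX 3070) runs with some layers on the GPU and the rest
              # in system RAM, trading speed for parameter count.
              from llama_cpp import Llama

              llm = Llama(
                  model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder MoE quant
                  n_gpu_layers=12,  # only as many layers as fit in VRAM; the rest run on CPU
                  n_ctx=4096,
              )

              print(llm("Why use MoE models?", max_tokens=64)["choices"][0]["text"])
              ```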