cross-posted from: https://lemmy.ml/post/45766694
Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not so up to date with the current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (from an LLM development perspective, that feels like an eternity to me), I wanted to tap the collective wisdom of Lemmy to maybe replace my model with something better out there.
Edit:
Specs:
GPU: RTX 3060 (12 GB VRAM)
RAM: 64 GB
gpt-oss-20b does not fit into VRAM completely, but with partial offloading it runs reasonably fast (fast enough for me)
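For anyone wanting to reproduce the partial-offload setup, here's a rough sketch using llama.cpp (an assumption — the post doesn't name the runtime; the model filename and layer count are placeholders to tune for 12 GB of VRAM):

```shell
# Hypothetical llama.cpp invocation; model filename and layer count are assumptions.
# --n-gpu-layers puts that many transformer layers on the GPU and keeps the rest on the CPU.
llama-server \
  -m gpt-oss-20b.gguf \
  --n-gpu-layers 20 \
  --ctx-size 8192
```

Raise `--n-gpu-layers` until VRAM is nearly full; lowering `--ctx-size` also frees VRAM if you don't need long context.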


I tried some new ones recently (though I have a 24GB GPU). Qwen3.5 9B is pretty impressive for such a small model at agentic stuff like Claude Code (I can run the Opus distilled model quantized to 6 bit with the full 256k context and no CPU offloading). Gemma4 26B is good when I don't need agentic capabilities or a lot of context (it sucks at agentic stuff). You can probably run the smaller versions of these, or run them with less context.