cross-posted from: https://lemmy.ml/post/45766694
Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not really up to date with the current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (from an LLM development perspective, that feels like an eternity to me), I just wanted to tap the collective wisdom of Lemmy to maybe replace my model with something better out there.
Edit:
Specs:
GPU: RTX 3060 (12GB vRAM)
RAM: 64 GB
gpt-oss-20b does not fit into the vRAM completely, but it's partially offloaded and is reasonably fast (enough for me)
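The partial offload described above is the idea behind llama.cpp's `-ngl` (number of GPU layers) option: put as many layers as fit on the card, keep the rest in system RAM. A minimal sketch of the arithmetic, with illustrative numbers only (the model size, layer count, and overhead here are assumptions, not gpt-oss-20b's real specs):

```python
# Rough sketch: estimate how many transformer layers of a quantized model fit
# in a given VRAM budget for partial GPU offload (llama.cpp's -ngl idea).
# All figures below are hypothetical, for illustration.

def layers_that_fit(model_size_gb: float, n_layers: int, vram_gb: float,
                    overhead_gb: float = 1.5) -> int:
    """Assume layers are roughly equal in size; reserve some VRAM for
    the KV cache, activations, and CUDA overhead."""
    per_layer_gb = model_size_gb / n_layers
    budget = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(budget / per_layer_gb))

# Hypothetical ~12 GB quantized model with 24 layers on a 12 GB card:
# only part of it fits on the GPU, the rest stays in system RAM.
print(layers_that_fit(model_size_gb=12.0, n_layers=24, vram_gb=12.0))  # -> 21
```

With the (made-up) numbers above, 21 of 24 layers land on the GPU, which is why the model still runs "reasonably fast" despite not fitting entirely.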
I tried some new ones recently (though I have a 24GB GPU). Qwen3.5 9B is pretty impressive for such a small model for agentic stuff like Claude Code. (I can run the Opus distilled model quantized to 6 bit with the full 256k context and no CPU offloading). Gemma4 26B is good if I don’t need agentic stuff or a lot of context (it sucks for agentic stuff). You can probably run the smaller versions of these, or with less context.
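Fitting the "full 256k context" is the hard part here, because the KV cache often dominates VRAM at long contexts. A back-of-the-envelope sketch of why, using entirely hypothetical dimensions (not the real config of any model named in this thread):

```python
# Back-of-the-envelope KV-cache sizing. The dimensions below are made up
# to illustrate the scaling; they are not any real model's config.

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 seq_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache size in GiB: 2x for keys and values, fp16 = 2 bytes/elem.
    Grows linearly with context length."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 2**30

# e.g. a hypothetical model with 36 layers, 8 KV heads (GQA), head dim 128,
# at a 256k context in fp16:
print(kv_cache_gib(36, 8, 128, 256 * 1024))  # -> 36.0 GiB
```

The linear growth in `seq_len` is why people quantize the KV cache or use fewer KV heads (GQA/MQA) to make long contexts fit; halving the context halves this number.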
> I'm using was released at the beginning of August 2025 (From an LLM development perspective, it feels like an eternity to me)
I mean yeah, more than 6 months in AI world is an eternity 🤣
the big ones are gemma 4 and qwen 3.5
I’m using Gemma 4 and it works really really well, it’s sad to me that I’m using big tech’s model but it’s just so far ahead of mistral and others that I have no choice
Qwen is really good with thinking turned off. Turned on, it has a massive overthinking problem: you say "hi" and it'll think for 3 minutes about how best to reply.
Still waiting for DeepSeek to come out with v4 at this stage, but Gemma 4 is my current SOTA self-hosted model.
I think people are sleeping on GLM.
Tried it out recently and I like the results a lot so far.
GLM4.5 and 4.7 were good already; now they've released 5 and 5.1: https://github.com/zai-org/GLM-5
It says it's for vibecoding, but I use it like I would use ChatGPT and it gives usable answers to all of my varied questions. (Of course you always have to check for correctness, even if it's correct most of the time, which I do because I'm paranoid.)
I guess the only downside is how frigging huge it is.
> I guess the only downside is how frigging huge it is.
Yep :D
I saw 5.1 came out, however it requires a data centre to run :X
Hoping they release smaller models; curious to see how those would do.
Definitely give Gemma4 26ba4b a try
It's MoE, so you should be able to get the same offload, and A4B can be plenty fast.
It has decent world knowledge for its size, and from what I can tell it's okay at small-scale coding in common languages like Python.
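The reason an MoE can be "plenty fast" despite its total size is that only the router-selected experts run per token, so compute scales with active parameters, not total parameters. A sketch with a hypothetical parameter split (the shared/expert breakdown below is invented to roughly match a "26B total, ~4B active" shape, not the model's real architecture):

```python
# Why MoE inference is cheap per token: only a few experts execute, so the
# active parameter count (and thus compute) is a small fraction of the total.
# The parameter split here is hypothetical, for illustration only.

def active_fraction(total_experts: int, experts_per_token: int,
                    shared_params_b: float, expert_params_b: float) -> float:
    """Fraction of all parameters actually used for one token."""
    total = shared_params_b + total_experts * expert_params_b
    active = shared_params_b + experts_per_token * expert_params_b
    return active / total

# Hypothetical split: 2B shared params + 32 experts x 0.75B, 3 active/token
# -> 26B total, 4.25B active, i.e. ~16% of the weights do work per token.
print(round(active_fraction(32, 3, 2.0, 0.75), 2))  # -> 0.16
```

Note that VRAM still has to hold (or offload) all the experts; the win is in per-token compute, which is why partial offload plus MoE stays usable.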
I've been using gemma4:26b and it's pretty good, although a bit slow even on a 3090, and idk how the smaller versions compare.