- How much VRAM do you have?
- Which GPU?
- What sort of coding do you want to do?
No point in telling you “yo, dude, just grab MinMax 2.7 or GLM5.1”… unless you happen to have several GPUs running concurrently with a combined VRAM pool of 500GB or more.
There are strong local contenders (like Qwen3-Coder-Next), but as you can see, the ante is probably in the 45GB VRAM range just to load them up. Actually running them with a decent context length likely means you need to be in the 80-100GB range.
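Rough math, if you want to sanity-check those numbers: required VRAM is roughly quantized weights plus KV cache. Here's a minimal back-of-the-envelope sketch; the layer/head counts are illustrative guesses for an ~80B-class model, not anyone's published specs.

```python
# Back-of-the-envelope VRAM estimate: quantized weights + KV cache.
# All architecture numbers here are illustrative assumptions.

def vram_estimate_gb(
    params_b: float,         # total parameters, in billions
    bits_per_weight: float,  # ~4.5 for a typical 4-bit quant like Q4_K_M
    n_layers: int,
    n_kv_heads: int,         # KV heads (GQA), not attention heads
    head_dim: int,
    context_len: int,
    kv_bytes: int = 2,       # fp16 KV cache
) -> float:
    weights = params_b * 1e9 * bits_per_weight / 8
    # 2x for K and V, per layer, per KV head, per token of context
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
    return (weights + kv_cache) / 1e9

# Hypothetical ~80B model, 4-bit quant, 128K context:
print(f"{vram_estimate_gb(80, 4.5, 48, 8, 128, 131_072):.0f} GB")
# ~45GB of weights + ~26GB of KV cache ≈ 71GB, before runtime overhead
```

Shrink the context and the KV-cache term collapses, which is why “45GB just to load” and “80-100GB to actually use” can both be true.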
Doable… but maybe pay $10 on OpenRouter first to test-drive them before committing to $2,000+ worth of hardware upgrades.
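OpenRouter speaks the standard OpenAI-compatible API, so the test drive is only a few lines. The model slug below is a placeholder; check openrouter.ai/models for the exact ID of whatever you want to try.

```python
# Quick OpenRouter test drive via the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter API key
)

resp = client.chat.completions.create(
    model="qwen/qwen3-coder",  # placeholder slug; confirm on the model page
    messages=[
        {"role": "user", "content": "Write a Python function that merges two sorted lists."}
    ],
)
print(resp.choices[0].message.content)
```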
There are other, more reasonable, less hardware-dependent uses for local LLMs, but if you want fully local coders, it’s the same old story: pay to play (and that’s even if you don’t mind slow speeds / overnight batch jobs).
Right now, cloud-based providers are hemorrhaging money because they know it will lead to lock-in (i.e., people get used to what SOTA models can do, forgetting the multi-million-dollar infrastructure required to run them). Then, when users realize they can’t quite do the same with local gear (at least, not without spending $$$), the providers can ratchet prices up.
The Codex Pro plan just went to $300/month.
We’ve seen this playbook before, right?