Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code

71 points - today at 5:13 PM

Comments

trvz today at 7:11 PM

  ollama launch claude --model gemma4:26b
jonplackett today at 7:17 PM
So wait, what is the interaction between Gemma and Claude?
vbtechguy today at 5:13 PM
Here is how I set up Gemma 4 26B for local inference on macOS so that it can be used with Claude Code.
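A minimal sketch of that kind of setup, assuming LM Studio's `lms` CLI is installed and its OpenAI-compatible server on the default port 1234; the model identifier here is illustrative, not necessarily the exact name in LM Studio's catalog:

```shell
# Start LM Studio's local server headlessly (lms is LM Studio's CLI).
lms server start

# Load the model into the server (identifier is illustrative).
lms load gemma-4-26b

# Sanity-check the OpenAI-compatible endpoint before pointing a client at it.
curl http://localhost:1234/v1/models
```

Claude Code itself talks to Anthropic's API, so routing it at a local endpoint typically goes through a translating proxy; the commands above only cover the LM Studio side.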
Someone1234 today at 7:25 PM
Claude Code seems like a popular frontend for this currently. I wonder how long until Anthropic releases an update to make it anywhere from a little to a lot less turn-key? They've been very clear that they aren't exactly champions of this stuff being used outside of very specific ways.
martinald today at 7:44 PM
Just FYI, MoE doesn't really save (V)RAM. You still need all the weights loaded in memory; it just means fewer of them are consulted per forward pass. So it improves tok/s but not VRAM usage.