Provider Request: Ollama
Hey, I’d love to try out crush, but I prefer to run my models locally. Is there any planned support for using models hosted by Ollama on localhost, etc.?
Would love to see ollama or llama.cpp server support for local inference!
Ollama is supported by the OpenAI provider, just set the base_url to http://localhost:11434/v1/ or whatever is appropriate.
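For what it's worth, before wiring it into crush you can sanity-check that Ollama's OpenAI-compatible endpoint is reachable (this assumes the default port 11434; adjust the host/port if yours differs):

# list the models exposed through Ollama's OpenAI-compatible API
curl http://localhost:11434/v1/models

If that returns your local models, the same base_url should work in the provider config.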
Using: ollama run qwen3-coder
Config ($HOME/.config/crush/crush.json):
{
  "$schema": "https://charm.land/crush.json",
  "providers": {
    "ollama": {
      "type": "openai",
      "base_url": "http://localhost:11434/v1/",
      "name": "Ollama",
      "models": [
        {
          "id": "qwen3-coder",
          "name": "Qwen3-Coder (Ollama)",
          "context_window": 256000,
          "default_max_tokens": 20000,
          "cost_per_1m_in": 0,
          "cost_per_1m_out": 0,
          "cost_per_1m_in_cached": 0,
          "cost_per_1m_out_cached": 0,
          "supports_attachments": true,
          "has_reasoning_efforts": false,
          "can_reason": false
        }
      ]
    }
  }
}
It returns this error:
POST "http://localhost:11434/v1/chat/completions": 400 Bad Request
{
  "message": "registry.ollama.ai/library/qwen3-coder:latest does not support tools",
  "type": "api_error",
  "param": null,
  "code": null
}
Is there something missing in the configuration?
@AbeEstrada No, that seems to be correct. Qwen3-coder doesn't have the "tools" tag on the Ollama catalog.
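As a quick check (assuming a reasonably recent Ollama build, which lists a Capabilities section), you can see whether a local model advertises tool calling, or pull one that carries the tools tag on the catalog (llama3.1 here is just an example choice, not something crush requires):

# show model details; tool-calling models list "tools" under Capabilities
ollama show qwen3-coder
# pull a model that has the tools tag on the Ollama catalog
ollama pull llama3.1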
I'm not sure if Ramalama uses a different REST API, but it's a similar tool and support for it would also be appreciated.
https://github.com/containers/ramalama
https://github.com/charmbracelet/catwalk/issues/10#issuecomment-3145424968
I used this but pointed the URL at the LM Studio server, and it seems to be stuck too. I assume there needs to be a new provider for locally running models.
I found the LM Studio example, but it still doesn't work. I'll try to test some options.