
Provider Request: Ollama

Open · noahbald opened this issue 5 months ago · 6 comments

Hey, I'd love to try out Crush, but I prefer to run my models locally. Is there any planned support for using models hosted by Ollama on localhost, etc.?

noahbald · Jul 31 '25

Would love to see Ollama or a llama.cpp server option for local inference!

rinukkusu · Aug 01 '25

Ollama is supported via the OpenAI provider; just set the base_url to http://localhost:11434/v1/, or whatever is appropriate for your setup.
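
A quick way to sanity-check the endpoint first (assuming a default Ollama install listening on port 11434):

curl http://localhost:11434/v1/models

If that returns a JSON list of the models you've pulled, the OpenAI-compatible API is up and an openai-type provider pointed at that base_url should work.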

arodland · Aug 01 '25

Using: ollama run qwen3-coder

Config ($HOME/.config/crush/crush.json):

{
	"$schema": "https://charm.land/crush.json",
	"providers": {
		"ollama": {
			"type": "openai",
			"base_url": "http://localhost:11434/v1/",
			"name": "Ollama",
			"models": [
				{
					"id": "qwen3-coder",
					"name": "Qwen3-Coder (Ollama)",
					"context_window": 256000,
					"default_max_tokens": 20000,
					"cost_per_1m_in": 0,
					"cost_per_1m_out": 0,
					"cost_per_1m_in_cached": 0,
					"cost_per_1m_out_cached": 0,
					"supports_attachments": true,
					"has_reasoning_efforts": false,
					"can_reason": false
				}
			]
		}
	}
}

It returns this error:

POST "http://localhost:11434/v1/chat/completions": 400 Bad Request 
{
  "message": "registry.ollama.ai/library/qwen3-coder:latest does not support tools",
  "type": "api_error",
  "param": null,
  "code": null
}

Is there something missing in the configuration?

AbeEstrada · Aug 01 '25

@AbeEstrada No, your config is correct; the problem is the model. qwen3-coder doesn't have the "tools" tag in the Ollama catalog, so tool calls are rejected.
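
You can check this locally too; recent versions of the Ollama CLI print a Capabilities section (sketch, the exact output varies by version):

ollama show qwen3-coder

If "tools" isn't listed under Capabilities, any tool-calling request will fail with that same 400, so you'd want to pick a model that does list it.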

arodland · Aug 01 '25

I'm not sure if Ramalama uses a different REST API, but it's a similar tool and support for it would also be appreciated.

https://github.com/containers/ramalama
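
Since ramalama serve wraps a llama.cpp-based server that speaks the OpenAI-compatible API (by default on port 8080; both the port and the field set below are assumptions modeled on the Ollama config above, untested):

{
	"$schema": "https://charm.land/crush.json",
	"providers": {
		"ramalama": {
			"type": "openai",
			"base_url": "http://localhost:8080/v1/",
			"name": "RamaLama",
			"models": [
				{
					"id": "your-model-name",
					"name": "Your Model (RamaLama)",
					"context_window": 32768,
					"default_max_tokens": 8192
				}
			]
		}
	}
}

The "id" placeholder would have to match whatever name the server reports for the loaded model.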

tidux · Aug 04 '25

https://github.com/charmbracelet/catwalk/issues/10#issuecomment-3145424968

I used this but pointed the URL at the LM Studio server, and it seems to get stuck too. I assume there needs to be a new provider for locally running models.

I found the LM Studio example, but it still doesn't work; I'll try to test some options.
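
If LM Studio's local server is actually running, this should list the loaded models (LM Studio defaults to port 1234; adjust if you changed it):

curl http://localhost:1234/v1/models

If that works but Crush still hangs, the problem is more likely the provider config or the model's tool-calling support than the server itself.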

BKR-dev · Aug 12 '25