
Auto-discover existing ollama models

Open RichardoC opened this issue 10 months ago • 3 comments

Currently, you have to add ollama models manually. It would be better if Goose discovered the existing models and made them available without having to manually add them.

The current Goose behaviour for me is similar to this openwebui bug https://github.com/open-webui/open-webui/discussions/4376

RichardoC avatar Feb 13 '25 10:02 RichardoC

refer it

PrinceSajjadHussain avatar Feb 26 '25 11:02 PrinceSajjadHussain

The ollama model list should be easy to get and parse instead of manually typing the models in. E.g. from bash I can do:

curl -s http://$OLLAMA_HOST:11434/api/tags | jq -r '.models[].name'
gemma3:1b
llama3:latest
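The same discovery could be done in-process rather than shelling out. A minimal Python sketch, assuming the documented shape of Ollama's `/api/tags` response (a `models` array of objects with a `name` field); the sample payload below is illustrative, not real server output:

```python
import json

# Illustrative /api/tags payload; in practice this JSON would come from
# GET http://$OLLAMA_HOST:11434/api/tags on a running Ollama server.
sample_response = json.loads("""
{
  "models": [
    {"name": "gemma3:1b", "size": 815319791},
    {"name": "llama3:latest", "size": 4661224676}
  ]
}
""")

def list_model_names(tags_response: dict) -> list:
    """Extract model names from an /api/tags-style payload."""
    return [m["name"] for m in tags_response.get("models", [])]

print(list_model_names(sample_response))  # ['gemma3:1b', 'llama3:latest']
```

Parsing the JSON directly avoids the `jq` dependency and would let Goose populate its model picker from whatever the local Ollama instance already has pulled.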

jmuggli avatar Mar 26 '25 21:03 jmuggli

Seeing the selection UI, I thought it was already automatic and came here to report a bug that it does not work. So there is a config file, but it seems you can only put one model there, rather than a selection?

petri avatar Jun 02 '25 07:06 petri

This sounds like a great idea, but we're putting it in the icebox for now - we're looking at improving onboarding, where we can revisit it.

DOsinga avatar Jul 02 '25 19:07 DOsinga

I am picking this up shortly - will probably use this to track the general onboarding experience (which includes letting people install ollama if not already there)

michaelneale avatar Jul 31 '25 03:07 michaelneale

Also, as this is part of getting started, we will have to curate which models we want to list/offer, since in general most (by number) won't be of much use outside chat mode.

michaelneale avatar Jul 31 '25 03:07 michaelneale

The ollama experience is now more curated by default - it will fetch the model for you if you use that flow.

However, we could also list models that are already there. I don't think that is super useful, though, as most will not work (tool support is required) and each would have to be tried individually (which causes ollama to load models into GPU/memory inefficiently), making for a very unpleasant experience. Once ollama can enumerate models with actually working tool calling we could do that (but their metadata is often incorrect - models that claim to support tool calling frequently don't). If things started in chat mode this would be OK (but not really an agent). Will park this for now and review as we get more APIs/models on board.
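The filtering described above could be sketched as a capability check. This is a hypothetical illustration, assuming the `capabilities` list that recent Ollama `/api/show` responses include (and, as noted above, that metadata can be wrong in practice - a model advertising tools may still fail at them):

```python
def supports_tools(show_response: dict) -> bool:
    """Heuristic filter: keep only models whose /api/show-style
    metadata advertises tool calling. Models without a capabilities
    list are treated as chat-only."""
    return "tools" in show_response.get("capabilities", [])

# Hypothetical /api/show payloads for two models, for illustration only.
model_with_tools = {"capabilities": ["completion", "tools"]}
chat_only_model = {"capabilities": ["completion"]}

print(supports_tools(model_with_tools))  # True
print(supports_tools(chat_only_model))   # False
```

A metadata check like this avoids the expensive alternative of loading each model into GPU/memory just to probe it, at the cost of trusting capability claims that are not always accurate.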

michaelneale avatar Aug 08 '25 02:08 michaelneale