gptel
Issues with Ollama
Hi, I've mostly copy-pasted the config, with minor modifications to use dolphin-mistral:
(setq gptel-model "dolphin-mistral:latest"
      gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"
                      :stream t
                      :models '("dolphin-mistral:latest")))
Ollama is up and running, but gptel starts with ChatGPT 3.5, and ollama/dolphin-mistral is not available in the M-x gptel-menu GPT-Model section.
Am I missing something, or is this a bug?
- Are you on the latest commit? If not, please update gptel.
- Could you run this:
(map-keys gptel--known-backends)
and paste the output here?
@lbraglia Did you have a chance to try this?
Not yet, I'll try it ASAP. BTW, the version I've used is the one available on MELPA.
Thanks for the follow-up.
Any updates?
OK, I've now updated to
gptel 20240425.2058 installed Interact with ChatGPT or other LLMs
and with
(map-keys gptel--known-backends)
I obtain ("ChatGPT"),
while from gptel-menu, selecting -m GPT Model, I get:
Click on a completion to select it.
In this buffer, type RET to select the completion near point.
8 possible completions:
ChatGPT:gpt-3.5-turbo ChatGPT:gpt-3.5-turbo-16k ChatGPT:gpt-4
ChatGPT:gpt-4-0125-preview ChatGPT:gpt-4-1106-preview ChatGPT:gpt-4-32k
ChatGPT:gpt-4-turbo ChatGPT:gpt-4-turbo-preview
The Ollama backend is not being defined, then. Does your setq form throw any errors? Can you try evaluating it manually?
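For reference, a minimal way to check this by hand, reusing the backend name, host, and model from the config at the top of the thread:

```elisp
;; Evaluate this form manually (e.g. with M-: or C-x C-e) and watch
;; the echo area / *Messages* for errors.  "Ollama", the host, and
;; the model name are taken from the config posted above.
(gptel-make-ollama "Ollama"
  :host "localhost:11434"
  :stream t
  :models '("dolphin-mistral:latest"))

;; If the backend registered, "Ollama" should now appear here
;; alongside "ChatGPT":
(map-keys gptel--known-backends)
```

If the first form errors out, or the second still returns only ("ChatGPT"), the setq in your init file is likely running before gptel is loaded.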
After a long time I was finally able to get back to this, and with current gptel and Emacs 29.4 everything works fine with local Ollama. Thank you again!