gptel
Ollama response delayed or error
I get the error when I run gptel-send with the following configuration. It also takes 10 minutes before the response arrives, and I sometimes get Response Error: nil.
(setq-default gptel-model "mistral:latest" ; Pick your default model
              gptel-backend (gptel-make-ollama "Ollama"
                              :host "localhost:11434"
                              :stream t
                              :models '("mistral")))
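One detail worth checking in the config above: gptel-model is set to "mistral:latest" while the backend's :models list only contains "mistral". A sketch with the two names aligned (assuming the model was pulled as mistral:latest):

```elisp
;; Same setup, with the model name matched between gptel-model and
;; :models (assumption: the model tag on this machine is "mistral:latest").
(setq-default gptel-model "mistral:latest"
              gptel-backend (gptel-make-ollama "Ollama"
                              :host "localhost:11434"
                              :stream t
                              :models '("mistral:latest")))
```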
- gptel-curl:
{"model":"mistral:latest","created_at":"2024-02-10T22:43:58.789276Z","response":"","done":true,"total_duration":417571583,"load_duration":417073333}
(8d87d74a71c4ad1eb816d1778ae4e5db . 120)
- gptel-log:
{
"gptel": "request body",
"timestamp": "2024-02-11 11:13:45"
}
{
"model": "mistral",
"system": "You are a large language model living in Emacs and a helpful assistant. Respond concisely.",
"prompt": "Test",
"stream": true
}
The ollama server is also running, and ollama run mistral works normally.
Originally posted by @luyangliuable in https://github.com/karthink/gptel/issues/181#issuecomment-1937366346
The response from Ollama is empty.
Could you run (setq gptel-log-level 'debug), try to use Ollama again, and paste the contents of the *gptel-log* buffer? Please wait until either an error or a timeout.
I got the following in *gptel-log*:
{
"gptel": "request Curl command",
"timestamp": "2024-02-11 13:06:13"
}
[
"curl",
"--disable",
"--location",
"--silent",
"--compressed",
"-XPOST",
"-y300",
"-Y1",
"-D-",
"-w(5242174a9fcb32555dea3157193c24d7 . %{size_header})",
"-d{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}",
"-HContent-Type: application/json",
"http://localhost:11434/api/generate"
]
It seems the problem may stem from Ollama itself. I attempted to execute the following command:
curl -X POST -d "{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}" -H "Content-Type: application/json" "http://localhost:11434/api/generate"
in the shell, but it hangs for hours without any response.
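One way to narrow this down (a sketch, assuming Ollama's default port 11434): first confirm the server answers at all via /api/tags, then send the same request body with "stream": false, so the hang cannot be blamed on streaming.

```shell
# Quick liveness check -- lists the installed models if the server answers:
# curl -s http://localhost:11434/api/tags

# Build a minimal version of the body gptel sends, with streaming disabled,
# to rule out a streaming-specific hang:
body='{"model":"mistral","prompt":"Test","stream":false}'
echo "$body"
# curl -s -X POST -H "Content-Type: application/json" \
#      -d "$body" http://localhost:11434/api/generate
```

If the non-streaming request returns promptly but the streaming one hangs, that points at how the streamed response is being delivered rather than at the model itself.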
Has Ollama ever worked for you on this machine?
Closing as there has been no response in 11 months.