
Ollama response delayed or error

karthink opened this issue 1 year ago • 4 comments

I get the following error when I run gptel-send with the configuration below. It also takes about 10 minutes before the response arrives, and I sometimes get Response Error: nil.

(screenshot of the error)
(setq-default gptel-model "mistral:latest" ; Pick your default model
              gptel-backend (gptel-make-ollama "Ollama"
                              :host "localhost:11434"
                              :stream t
                              :models '("mistral")))
  • gptel-curl:
{"model":"mistral:latest","created_at":"2024-02-10T22:43:58.789276Z","response":"","done":true,"total_duration":417571583,"load_duration":417073333}
(8d87d74a71c4ad1eb816d1778ae4e5db . 120)

  • gptel-log:
{
  "gptel": "request body",
  "timestamp": "2024-02-11 11:13:45"
}
  {
  "model": "mistral",
  "system": "You are a large language model living in Emacs and a helpful assistant. Respond concisely.",
  "prompt": "Test",
  "stream": true
}

The ollama server is also active, and ollama run mistral works normally.

Originally posted by @luyangliuable in https://github.com/karthink/gptel/issues/181#issuecomment-1937366346

karthink avatar Feb 11 '24 00:02 karthink

The response from Ollama is empty.

Could you run (setq gptel-log-level 'debug), try using Ollama, and paste the contents of the *gptel-log* buffer? Please wait until you either see an error or the request times out.

karthink avatar Feb 11 '24 00:02 karthink

I got the following:

On gptel-log:

{
  "gptel": "request Curl command",
  "timestamp": "2024-02-11 13:06:13"
}
[
  "curl",
  "--disable",
  "--location",
  "--silent",
  "--compressed",
  "-XPOST",
  "-y300",
  "-Y1",
  "-D-",
  "-w(5242174a9fcb32555dea3157193c24d7 . %{size_header})",
  "-d{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}",
  "-HContent-Type: application/json",
  "http://localhost:11434/api/generate"
]


luyangliuable avatar Feb 11 '24 02:02 luyangliuable

It seems the problem may stem from Ollama itself. I attempted to execute the following command:

curl -X POST -d "{\"model\":\"mistral\",\"system\":\"You are a large language model living in Emacs and a helpful assistant. Respond concisely.\",\"prompt\":\"Generate while loop in rust.\",\"stream\":true}" -H "Content-Type: application/json" "http://localhost:11434/api/generate"

in the shell, but it ends up hanging for hours without any response.
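A quick sanity check in this situation is to probe the server's root endpoint, which a healthy Ollama instance answers immediately with the plain text "Ollama is running". This is a sketch assuming the default port 11434; the timeout value is arbitrary:

```shell
# Probe the Ollama server's root endpoint (assumes the default port 11434).
# A healthy server replies with the plain text "Ollama is running";
# -f makes curl fail on HTTP errors, --max-time bounds the wait.
if curl -sf --max-time 5 http://localhost:11434/; then
  echo "server responded"
else
  echo "no response from server"
fi
```

If this probe also hangs or fails, the problem is with the Ollama server itself rather than with how gptel constructs its request.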

luyangliuable avatar Feb 11 '24 03:02 luyangliuable

Has Ollama ever worked for you on this machine?

karthink avatar Feb 11 '24 18:02 karthink

Closing as there has been no response in 11 months.

karthink avatar Jan 02 '25 02:01 karthink