chatgpt-shell takes over 7 seconds to run: ollama models are slow to validate
chatgpt-shell took too long to run, so I ran (elp-instrument-package "chatgpt-shell") and saw the following in (elp-results):
Function Name                               Call Count  Elapsed Time  Average Time
==================================================================================
chatgpt-shell                               1           7.587093      7.587093
chatgpt-shell-start                         1           7.587086      7.587086
chatgpt-shell--resolved-model               15          7.586741      0.5057827333
chatgpt-shell-model-version                 330         7.5860030000  0.0229878878
chatgpt-shell-ollama--validate-command      1320        7.5674309999  0.0057329022
chatgpt-shell-ollama--fetch-model-versions  1320        7.5549990000  0.0057234840
chatgpt-shell--prompt-pair                  4           6.073431      1.51835775
chatgpt-shell--shell-info                   5           5.062926      1.0125852
chatgpt-shell--model-short-version          5           5.062874      1.0125747999
chatgpt-shell--update-prompt                1           4.578275      4.578275
chatgpt-shell--model-label                  5           2.523899      0.5047798
chatgpt-shell--make-buffer-name             1           1.513425      1.513425
chatgpt-shell-google--validate-command      2640        0.0003690000  1.397...e-07
chatgpt-shell-anthropic--validate-command   1980        0.0002900000  1.464...e-07
chatgpt-shell-openai--validate-command      330         0.0001559999  4.727...e-07
chatgpt-shell-kagi--validate-command        330         0.0001279999  3.878...e-07
chatgpt-shell-deepseek--validate-command    660         0.0001209999  1.833...e-07
chatgpt-shell--add-menus                    1           5e-05         5e-05
chatgpt-shell--shrink-system-prompt         10          4.7e-05       4.7e-06
chatgpt-shell--system-prompt-name           5           3.500...e-05  7.000...e-06
chatgpt-shell--primary-buffer               3           2.100...e-05  7.000...e-06
chatgpt-shell--shell-buffers                3           1.499...e-05  4.999...e-06
chatgpt-shell-duplicate-map-keys            1           4e-06         4e-06
chatgpt-shell--prompt-regexp                4           0.0           0.0
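For anyone who wants to reproduce this kind of profile, it boils down to the standard elp workflow (only the package prefix is chatgpt-shell specific; the elp-restore-all step is optional cleanup):

(require 'elp)
;; Instrument every function whose name starts with "chatgpt-shell".
(elp-instrument-package "chatgpt-shell")
;; ...run M-x chatgpt-shell here to trigger the slow startup path...
;; Show per-function call counts, total elapsed time, and average time.
(elp-results)
;; Optional: remove the instrumentation afterwards.
(elp-restore-all)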
As the profile shows, the ollama code takes up most of the time, and I don't even have ollama installed.
The mitigation is to restrict the available models:
(setq chatgpt-shell-models (chatgpt-shell-openai-models))
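If you still want more than one provider, the same idea should extend to appending per-provider lists -- assuming the other backends expose analogous chatgpt-shell-<provider>-models functions, which I haven't double-checked:

;; Keep only OpenAI and Anthropic models so ollama is never probed.
;; chatgpt-shell-anthropic-models is assumed here by analogy with
;; chatgpt-shell-openai-models.
(setq chatgpt-shell-models
      (append (chatgpt-shell-openai-models)
              (chatgpt-shell-anthropic-models)))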
Hey, thanks for reporting with details! Sorry for the delay here...
Can you share what OS you're running on? Are you online or offline when starting the shell? Are you running ollama on your machine? I'm on macOS, running chatgpt-shell with and without Ollama running in the background, and haven't been able to reproduce just yet.
I'm on macOS 14.5; I was online and not running ollama.
I was hitting really bad slowness as well.
I believe the issue is that if you do not have chatgpt-shell-model-version set -- it tries to validate all the models to see which one to choose. There are a lot of models, so this takes a long time and happens synchronously.
Setting chatgpt-shell-model-version to a specific model fixed the slowness for me.
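For example (the model name below is only illustrative -- it has to match one of the entries in chatgpt-shell-models):

;; Pin the model up front so startup skips validating every known model.
(setq chatgpt-shell-model-version "gpt-4o")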
@proger -- I wonder if you didn't have it set?
Hmmm...
Actually - maybe that's not helpful. It seems you were specifically wondering why the ollama models take so long --
That I don't know... but setting chatgpt-shell-model-version makes it irrelevant 😄 🤷