
Increase connection Timeout

Open shafiqalibhai opened this issue 9 months ago • 9 comments

Maybe add a variable in settings to change the default timeout setting.
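A minimal sketch of what such a setting could look like, assuming a hypothetical settings store with a `request_timeout_seconds` key (the names and default are illustrative, not Enchanted's actual code):

```python
# Hypothetical settings lookup; Enchanted's real implementation may differ.
DEFAULT_TIMEOUT_SECONDS = 60.0

def effective_timeout(settings: dict) -> float:
    """Return the user-configured request timeout, falling back to the default."""
    value = settings.get("request_timeout_seconds")
    if value is None:
        return DEFAULT_TIMEOUT_SECONDS
    # Guard against nonsensical values entered in a settings UI.
    return max(1.0, float(value))

print(effective_timeout({}))                                # → 60.0
print(effective_timeout({"request_timeout_seconds": 300}))  # → 300.0
```

The key point is only the fallback behaviour: requests keep working with a sane default, and users loading big models can raise the limit.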

shafiqalibhai avatar May 11 '24 16:05 shafiqalibhai

Seconded, since sometimes the initial load of the model into Ollama times out for big models and you have to re-submit your prompt. Once it's "warmed up" it's fine.

Maybe there's an alternative way for the program to see if Ollama is still running and just taking a long time to respond.
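The "resubmit after warm-up works" behaviour also suggests a client-side workaround: on a timeout, retry once with a much longer budget to cover the one-time model-load delay. A rough sketch under that assumption (the `send` function here is a stand-in, not Ollama's API):

```python
def request_with_retry(send, timeout: float, warmup_timeout: float):
    """Try a request once; if it times out, retry with a longer budget,
    assuming the first attempt triggered a cold model load."""
    try:
        return send(timeout)
    except TimeoutError:
        return send(warmup_timeout)

# Simulated backend: the first call is slow (cold model load),
# later calls respond quickly once the model is "warmed up".
state = {"warm": False}

def fake_send(timeout: float):
    needed = 120.0 if not state["warm"] else 0.5  # seconds of simulated work
    state["warm"] = True
    if needed > timeout:
        raise TimeoutError("request timed out")
    return "response"

print(request_with_retry(fake_send, timeout=30.0, warmup_timeout=300.0))  # → response
```

This avoids making the user re-submit manually, at the cost of a second request; a real fix would still want the timeout itself to be configurable.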

adamierymenko avatar May 14 '24 17:05 adamierymenko

Totally agree on this. The timeout needs to be increased.

haydonryan avatar May 16 '24 21:05 haydonryan

This is important for folks with low-end hardware. I agree, it should be in the settings.

egaralmeida avatar May 19 '24 04:05 egaralmeida

It's important even for high-end hardware if you're using a giant model. Sometimes the initial model load times out and you have to resubmit, after which it works.

adamierymenko avatar May 29 '24 16:05 adamierymenko

100%. I'm running a 16-core EPYC as my LLM machine, and it really chugs trying to load Mixtral 8x22B, even when loading from NVMe into RAM.

haydonryan avatar May 29 '24 16:05 haydonryan

+1

mcr-ksh avatar Jun 11 '24 21:06 mcr-ksh

+1. Running on a 2690 v4 Xeon in an Alpine Linux VM on Proxmox.

bignay2000 avatar Jun 20 '24 00:06 bignay2000

Can confirm that this is still a problem. First attempt to use llama3.1:70b on an M1 Max laptop times out waiting for a response. If I "edit" the question and resubmit it works fine.

adamierymenko avatar Jul 24 '24 17:07 adamierymenko

I have the same problem with larger models on my computer. An adaptable timeout setting would be awesome.

kuyper avatar Aug 27 '24 10:08 kuyper

Seconding this. I am getting cut-off responses with my remote Ollama setup. Since other clients work, I imagine it may be caused by a timeout.

kov avatar Oct 20 '24 16:10 kov