Ollama Local model issue after update

Open · HashemHamdy opened this issue 1 year ago · 4 comments

After cloning yesterday's version, the local model can't be detected and no longer replies, not even for one or two steps like the previous version managed:

24.04.26 21:38:46: root: ERROR : Inference took too long. Model: OLLAMA, Model ID: llama3
24.04.26 21:38:46: root: INFO : SOCKET inference MESSAGE: {'type': 'error', 'message': 'Inference took too long. Please try again.'}
24.04.26 21:38:46: root: WARNING: Inference failed

HashemHamdy · Apr 26 '24

Same problem for me! Hopefully this gets fixed soon; I was really excited to try this out.

nemsip · Apr 27 '24

I am also facing the same problem:

24.04.27 11:48:22: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 730} Model: mistral, Enum: OLLAMA
24.04.27 11:48:23: root: INFO : SOCKET inference MESSAGE: {'type': 'time', 'elapsed_time': '0.00'}
24.04.27 11:49:24: root: ERROR : Inference took too long. Model: OLLAMA, Model ID: mistral
24.04.27 11:49:24: root: INFO : SOCKET inference MESSAGE: {'type': 'error', 'message': 'Inference took too long. Please try again.'}
24.04.27 11:49:24: root: WARNING: Inference failed

maysaraanalyst · Apr 27 '24

Also happens for me with a 3090: "Inference took too long". I just use a 20GB command-r version, and a normal Ollama chat works super fast.
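For anyone hitting this, a quick way to rule Ollama itself out is to time a one-shot completion against its REST API directly. A minimal sketch in Python: the /api/generate endpoint and payload are Ollama's documented API; the model name is whatever you have pulled locally.

```python
# Time a single non-streaming completion against the local Ollama server
# (default port 11434). If this returns within a few seconds, Ollama itself
# is fine and the "Inference took too long" timeout is on devika's side.
import time

import requests

start = time.time()
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello.", "stream": False},
    timeout=300,  # generous client-side timeout to allow a cold model load
)
resp.raise_for_status()
print(f"elapsed: {time.time() - start:.1f}s")
print(resp.json()["response"])
```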

iChristGit · Apr 27 '24

Same here.

shahab00x · Apr 27 '24

You can now update the inference timeout via the settings page. Fetch the latest changes.
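If you would rather set it outside the UI, the same value should be editable in the config file. A minimal sketch, assuming the timeout lives in config.toml under a [TIMEOUT] section with an INFERENCE key in seconds; the section and key names here are assumptions, so check the sample config in your checkout for the exact spelling:

```toml
# Hypothetical config.toml excerpt: raise the inference timeout so slow
# local models get more time before devika reports "Inference took too long".
[TIMEOUT]
INFERENCE = 600
```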

ARajgor · May 02 '24