Ollama error after a few requests - ubatch must be set as the times of VS
Hi, my config: A770 + Ollama + OpenWebui + the intelanalytics/ipex-llm-inference-cpp-xpu:latest Docker image.
After 2-3 chat messages I get this error:
ollama_llama_server: /home/runner/_work/llm.cpp/llm.cpp/llm.cpp/bigdl-core-xe/llama_backend/sdp_xmx_kernel.cpp:191: void sdp_causal_xmx_kernel(const void *, const void *, const void *, const void *, const void *, const void *, float *, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int, const int, const int, const int, const int, const float, sycl::queue &) [HD = 128, VS = 32, RepeatCount = 8, Depth = 16, ExecuteSize = 8]: Assertion `(context_length-seq_len)%VS==0 && "ubatch must be set as the times of VS\n"' failed.
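For what it's worth, the failing check looks like a plain divisibility condition. Here is a minimal sketch of what it seems to assert, assuming `context_length`, `seq_len`, and `VS` map directly to the kernel parameters in the log (VS = 32 in this instantiation); the names are taken from the log, not from the actual sdp_xmx_kernel.cpp source:

```python
# Minimal sketch of the assertion above; names are assumptions based on the log.
VS = 32  # "VS = 32" in the kernel instantiation reported by the log

def would_pass(context_length: int, seq_len: int, vs: int = VS) -> bool:
    """True if (context_length - seq_len) % vs == 0, i.e. the kernel would not abort."""
    return (context_length - seq_len) % vs == 0

# Example: 70 total tokens with a 6-token new chunk passes (64 % 32 == 0),
# while 77 total with 6 new tokens would trip the assertion (71 % 32 != 0).
print(would_pass(70, 6))   # True
print(would_pass(77, 6))   # False
```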
If I click the 'refresh'/'again' button in the OpenWebui chat, Ollama reloads the model and it works, but again, after a few messages it fails.
Interesting thing is if I keep pressing the refresh button it works every time.
I've tried multiple models, and both the OpenWebui bundled in the Intel Docker image and the official latest version.
Can someone point me in the right direction? Thank you.
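In case it helps to reproduce this outside OpenWebui, here is a rough sketch that sends several consecutive chat turns to Ollama's HTTP API so the cached context keeps growing. It assumes the default port 11434 and that the model name (here "mistral-small") is adjusted to whatever is actually pulled:

```python
# Rough reproduction sketch, assuming Ollama listens on the default port 11434.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama chat endpoint
MODEL = "mistral-small"                         # adjust to a model that is actually pulled

messages = []
for i in range(6):  # a handful of turns is usually enough before the failure shows up
    messages.append({"role": "user", "content": f"Message {i}: tell me something new."})
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
    )
    # If the kernel assertion fires mid-loop, this request typically comes back as an error.
    resp.raise_for_status()
    reply = resp.json()["message"]["content"]
    print(f"--- turn {i} ---\n{reply[:200]}")
    messages.append({"role": "assistant", "content": reply})
```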
"Interesting thing is if I keep pressing the refresh button it works every time."
I'd like to correct myself. With smaller models I can click refresh 10 times and it works every time, but now I tried it with Mistral-Small, which is a larger model, and it failed even with the refresh button, not just after a new message.
Screencast From 2024-10-17 03-37-55.webm
You can see the first few tries fail; Ollama reloads the model every time, but then, by clicking only the refresh button, it works perfectly.
"Interesting thing is if I keep pressing the refresh button it works every time."
I'd like to correct myself. I've tried this with smaller models and I can click refresh 10 times, it works every time, but now I tried it with Mistral-Small which is a larger model and it failed even with the refresh button, not just after a new message.
I encountered a similar problem: "ubatch must be set as the times of GS".
Thank you for your feedback; you may try the latest ipex-llm[cpp] (version number >= 10.17) tomorrow.
The original issue is fixed: no errors and it doesn't reload, but after the first request the response is total nonsense, just random text. Same behavior with all models.
If you need any more information, please let me know. I really appreciate the help.
Hi everyone, we have made another attempt and it is working on our test cases; please try again tomorrow with the latest ipex-llm[cpp].