
An error occurred while running the qwen2.5:3b model.

JerryXu2023 opened this issue on Sep 25, 2024 · 2 comments

I have updated to version 20240924. When I run the qwen2.5:3b model, I get the error below:

```
time=2024-09-25T08:35:39.339+08:00 level=INFO source=server.go:395 msg="starting llama server" cmd="D:\python\ai\llama-cpp\dist\windows-amd64\lib\ollama\runners\cpu_avx2\ollama_llama_server.exe --model D:\software\ollama_models\blobs\sha256-970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 999 --parallel 1 --port 56356"
time=2024-09-25T08:35:39.532+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-25T08:35:39.532+08:00 level=INFO source=server.go:595 msg="waiting for llama runner to start responding"
time=2024-09-25T08:35:39.542+08:00 level=INFO source=server.go:629 msg="waiting for server to become available" status="llm server error"
time=2024-09-25T08:35:45.801+08:00 level=ERROR source=sched.go:456 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000135"
[GIN] 2024/09/25 - 08:35:45 | 500 | 6.7307138s | 127.0.0.1 | POST "/api/embed"
```

Kindly help to confirm.

JerryXu2023 · Sep 25 '24

Hi @JerryXu2023, we have reproduced your issue and are currently working on a fix. As a temporary workaround, you may install version 20240911.
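
For anyone hitting this in the meantime, here is a minimal sketch of the downgrade, assuming the ipex-llm[cpp] nightly wheel naming convention (2.x.0bYYYYMMDD) and the standard Windows setup steps from the ipex-llm Ollama quickstart; the exact version string for the 20240911 build is an assumption, so verify it on PyPI first:

```cmd
rem Fresh environment for the pinned build (optional, but avoids mixing wheels).
conda create -n llm-cpp python=3.11
conda activate llm-cpp

rem Pin the 20240911 nightly instead of the latest build.
rem NOTE: the exact version string (2.2.0b20240911) is an assumption; check PyPI.
pip install --pre "ipex-llm[cpp]==2.2.0b20240911"

rem Re-create the Ollama runner binaries in the current directory (Windows).
init-ollama.bat
```

After re-initializing, start `ollama serve` from the same directory so it picks up the pinned runners.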

sgwhat · Sep 25 '24

@sgwhat Thanks for your confirmation. I will install 20240911.

JerryXu2023 · Sep 26 '24