Yixiao


We are also hitting this issue while serving `llama3.2:3b-instruct-q4_K_M` with Ollama, using the Docker image `intelanalytics/ipex-llm-inference-cpp-xpu:2.2.0-SNAPSHOT@sha256:d2f6320fa5506789c5c753b979ef0070a6ae3d4bf0860171b7f226a0afe89c59`.

- OS: Windows 11 24H2, WSL2
- CPU/GPU: Intel Core Ultra 5 125H / Arc integrated GPU

It seems...
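
For completeness, this is roughly how we launch the container. It is only a sketch: the container name, model mount path, and environment variables are our own choices, and the exact Ollama startup steps inside the image may differ from the official IPEX-LLM instructions.

```bash
# Rough sketch of our setup (container name, mount path and env vars are
# assumptions, not taken from the official IPEX-LLM docs):
#   --device=/dev/dri   exposes the Arc iGPU inside WSL2 to the container
#   -v ...:/models      host directory holding the Ollama model files
docker run -itd \
  --net=host \
  --device=/dev/dri \
  -v "$HOME/ollama-models:/models" \
  -e OLLAMA_HOST=0.0.0.0 \
  --name=ipex-llm-ollama \
  intelanalytics/ipex-llm-inference-cpp-xpu:2.2.0-SNAPSHOT@sha256:d2f6320fa5506789c5c753b979ef0070a6ae3d4bf0860171b7f226a0afe89c59

# Start the Ollama server inside the container, then pull and run the model
# (assumes the ollama binary is on PATH in this image; the image may instead
# provide its own init script):
docker exec -d ipex-llm-ollama ollama serve
docker exec -it ipex-llm-ollama ollama run llama3.2:3b-instruct-q4_K_M
```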