SheldonChen

Results: 4 comments by SheldonChen

Hi @wjuncn, with https://github.com/ipex-llm/ipex-llm/releases/download/v2.3.0-nightly/ollama-ipex-llm-2.3.0b20250708-win.zip, `ollama run deepseek-r1:7b` works fine on a Windows Arc dGPU. To help us reproduce and identify the issue you are experiencing, could you provide the following information?...

Hi @hurui200320 @shivabohemian, thanks for reporting this issue! We tried to reproduce it on a Linux system with an Intel Arc dGPU (using the `intelanalytics/ipex-llm-inference-cpp-xpu:2.3.0-SNAPSHOT` container), and the model `deepseek-r1:8b` runs...
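For context, a minimal sketch of how a container like the one named above is typically launched on Linux with an Intel GPU passed through. The image tag comes from the comment; everything else (the `--device=/dev/dri` passthrough, `--net=host`, the model volume mount, and the container name) is an illustrative assumption, not the project's documented command:

```shell
# Sketch: start the ipex-llm inference container with Intel GPU access.
# --device=/dev/dri exposes the host's GPU render nodes to the container;
# the network mode, volume mount, and container name are assumptions.
docker run -itd \
  --net=host \
  --device=/dev/dri \
  -v ~/models:/models \
  --name=ipex-llm-ollama \
  intelanalytics/ipex-llm-inference-cpp-xpu:2.3.0-SNAPSHOT

# Then verify the GPU is visible from inside the container:
docker exec -it ipex-llm-ollama sycl-ls
```

If `sycl-ls` does not list a `[level_zero:gpu]` entry inside the container, the GPU was not passed through correctly.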

Hi @shivabohemian, thanks for the information. However, the output provided in your reply:

```
root@2c444a1a4f24:/llm# sycl-ls
[level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) Graphics 12.4.0 [1.6.32224.500000]
[opencl:cpu][opencl:0] Intel(R) OpenCL,...
```

@shivabohemian We apologize for the inconvenience. The Intel® Processor N150 is a newly released CPU, and our current Ollama Docker image may not yet be fully optimized for this specific...