chnxq
https://github.com/chnxq/ollama/tree/chnxq/add-oneapi You can try the branch above. It uses the latest ollama version, but it has only been tested on Windows. Ref: /llama/README-Intel-OneApi.md
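For reference, a minimal sketch of trying that branch. The repo URL, branch name, README path, and `gemma3` model tag come from the comments in this thread; the `go generate`/`go build` steps are ollama's generic Go build flow and may differ from the OneAPI-specific steps in that README, so treat this as an untested outline:

```shell
# Clone the fork and switch to the OneAPI branch
git clone https://github.com/chnxq/ollama.git
cd ollama
git checkout chnxq/add-oneapi

# See /llama/README-Intel-OneApi.md in the repo for the
# OneAPI-specific build prerequisites and flags.

# Generic ollama build (assumes Go and a C compiler are installed):
go generate ./...
go build .

# Start the server, then pull and run a model in another terminal:
./ollama serve
./ollama run gemma3:12b
```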
Yes, there may be problems with multiple graphics cards. I only managed to run it on my laptop and have no other hardware to test with. ollama-intel-gpu.bat in the zip file is used to show...
> Hi [@bibekyess](https://github.com/bibekyess), you may install our latest v0.6.2 ipex-llm ollama from https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly, which could support `gemma3-fp16`. Hi @sgwhat I made Ollama's source code support OneAPI by making changes....
Great! I'm looking forward to its arrival.
Have you gotten it working now? Even if only on an unstable branch?
https://github.com/chnxq/ollama/tree/chnxq/add-oneapi You can try the branch above. It uses the latest ollama version, and gemma3:12b works, but it has only been tested on Windows. Ref: /llama/README-Intel-OneApi.md
https://github.com/intel/ipex-llm/issues/13070 Someone has also tested it on Linux.