SONG Ge
v0.6.2 has been released. You may install it via `pip install --pre --upgrade ipex-llm[cpp]`.
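A minimal sketch of the install, assuming a fresh conda environment (the environment name and Python version are illustrative, not prescribed by the release note):

```cmd
:: Create an isolated environment and install the pre-release
:: ipex-llm C++/Ollama backend.
conda create -n llm-cpp python=3.11 -y
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]
```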
Judging from the log line `time=2025-04-24T11:13:00.708+08:00 level=INFO source=routes.go:1298 msg="Listening on 127.0.0.1:11434 (version 0.6.5)"`, I believe you are running the community edition of Ollama.
> Same issue, any updates? `level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"` Hi, this is normal behaviour. You will see GPU usage once a model has been loaded.
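For example (the model tag is only an example), load a model and then watch the GPU in Task Manager or a similar monitor; activity should appear only after the load:

```cmd
:: Load a model so the GPU is actually used, then check GPU utilization.
ollama run deepseek-r1:7b "hello"
```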
Hi, we are reproducing this issue and will get back to you soon.
Hi @AlexXT @y1xia0w @suning-git @vladislavdonchev, for the deepseek-r1 issue, you may set `num_ctx` to a larger value as a workaround. Please follow the steps below. 1. Create a file named...
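Since the steps above are truncated, here is a rough sketch of the usual Ollama way to raise `num_ctx` via a Modelfile (the model tag, context size, and custom model name are illustrative, not the exact values from the original steps):

```cmd
:: Write a Modelfile that bases a custom model on an existing tag and
:: raises the context window, then build and run the custom model.
(
  echo FROM deepseek-r1:7b
  echo PARAMETER num_ctx 8192
) > Modelfile
ollama create deepseek-r1-8k -f Modelfile
ollama run deepseek-r1-8k
```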
> I have the same issue. My hardware is Intel Core Ultra 9 185H with integrated GPU. I have two copies of ollama: one uses the CPU, the other one...
> Thanks for the workaround! Is this an issue in ipex-llm[cpp] or ollama? Could you say very briefly what caused it? I plan to use ipex-llm in my code. Ollama's...
> I've been having the same issue, and I tried increasing the `num_ctx` to `8192` and still have the same issue. The first generation seems to work just fine, but...
> Hello, I am the OP above. Do you think this might help with my issue? If so, does it matter where I create the Modelfile, or run the command...
Hi @vincent-wsz, please follow our [official document](https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md) to install ollama, and you may run ollama without `call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"`.
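A sketch of the launch flow based on that quickstart (script and environment variable names may differ between ipex-llm versions; treat them as assumptions and follow the linked document for the exact steps):

```cmd
:: In the environment where ipex-llm[cpp] is installed, initialize the
:: ipex-llm build of Ollama and start the server; no oneAPI setvars.bat
:: call is needed.
conda activate llm-cpp
init-ollama.bat
set OLLAMA_NUM_GPU=999
ollama serve
```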