SONG Ge
### Step 1. Start the ollama server with the commands below:

```bash
export ONEAPI_DEVICE_SELECTOR=level_zero:1
./ollama serve
```

If this step returns a `libmkl.so`-related error, please turn to step 2.

###...
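As a minimal sketch of how to pick the right index for `ONEAPI_DEVICE_SELECTOR`: `sycl-ls` (shipped with the oneAPI basekit, assumed to be installed and sourced) lists the available devices, and the number after `level_zero:` must match the GPU you want from that listing. The device index shown is illustrative.

```shell
# sycl-ls prints one line per device, e.g.:
#   [level_zero:gpu:0] Intel(R) Arc(TM) A770 Graphics ...
# Run it if available to find your GPU's index.
command -v sycl-ls >/dev/null && sycl-ls

# Select the Level Zero device at index 1 before starting the server.
export ONEAPI_DEVICE_SELECTOR=level_zero:1
echo "selector=$ONEAPI_DEVICE_SELECTOR"

# Then start the server (commented out here; run it from your ollama directory):
# ./ollama serve
```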
It seems there is an error when running Starcoder2. We are working on resolving this issue.
Hi @js333031 , we have fixed this issue. You can install our latest version of ollama tomorrow via `pip install --pre --upgrade ipex-llm[cpp]`.
> With the new update, it does not get stuck, but it looks like only the CPU is used. Should the GPU (A770m) work?

Yes, the A770m should work, but please install the latest...
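If you suspect inference is running on the CPU only, one quick check (an assumption on our side, not part of ipex-llm itself) is `intel_gpu_top` from the `intel-gpu-tools` package: run it in a second terminal while a prompt is generating and watch for engine activity.

```shell
# Print the suggested diagnostic command if intel_gpu_top is installed;
# the tool itself must be run interactively (usually with sudo) while
# ollama is generating, watching the Render/3D and Compute engine rows.
if command -v intel_gpu_top >/dev/null; then
    echo "run: sudo intel_gpu_top   # watch Render/3D and Compute engine activity"
else
    echo "intel_gpu_top not found; install the intel-gpu-tools package first"
fi
```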
Hi @js333031 , we found that the `starcoder2` model produces garbage output because the official ollama template is **poorly supported** for it. We haven't found a better template in the community so...
Hi @ybeetle8 , qwen3 model has been supported in our latest version. You may install it via `pip install --pre --upgrade ipex-llm[cpp]`, see https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md for more usage details.
You may switch to `transformers==4.37.0` and try it again.
Hi @yunbiaolin , we cannot reproduce your issue; ollama works fine on our Windows Arc device. Could you please provide more details on how you installed and ran ollama?
Hi @adi-lb-phoenix, could you please provide your environment and device configuration? In our test, ollama was able to run codellama as expected on MTL Linux.
We are still investigating the cause of the `codellama` output issue on Linux with Arc A770 and will update you as soon as possible.