SONG Ge
Hi @RonkyTang , we have released a new ollama version: https://www.modelscope.cn/models/Intel/ollama .
Hi @RonkyTang , I apologize for the late reply. Memory usage depends on many factors, including the values of `num_parallel` and `num_ctx`. You can try adjusting these parameters to...
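A minimal sketch of how these two knobs are usually set, assuming the standard upstream ollama interface (`OLLAMA_NUM_PARALLEL` environment variable and the per-request `num_ctx` option); the ipex-llm build may behave slightly differently, so check its docs:

```shell
# ASSUMPTION: standard upstream ollama knobs; names may differ per release.

# Limit concurrent requests handled per model; lower values reduce memory use.
export OLLAMA_NUM_PARALLEL=1

# num_ctx is a per-request model option, e.g. via the generate API:
# curl http://localhost:11434/api/generate -d '{
#   "model": "llama3",
#   "prompt": "hello",
#   "options": { "num_ctx": 2048 }
# }'

# Start the server with the settings above in effect.
ollama serve
```

Smaller `num_ctx` and `num_parallel` values trade throughput and context length for lower memory usage.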
Hi @shailesh837 , we will inform you when we make progress.
We do not currently support https://huggingface.co/microsoft/Florence-2-large .
Hi @bibekyess, you may install our latest ipex-llm ollama (v0.6.2) from https://github.com/ipex-llm/ipex-llm/releases/tag/v2.3.0-nightly, which supports `gemma3-fp16`.
Hi @bibekyess , I have fixed the `ngl` issue; you can try it tomorrow via `pip install --pre --upgrade ipex-llm[cpp]`, or download a zip from [link](https://www.modelscope.cn/models/Intel/ollama/files). Good luck...
Hi @bibekyess, based on our investigation, running Gemma3 is not supported in our current version. Gemma3 uses an entirely new graph structure that is incompatible with our existing...
Hi @GamerSocke , we are reproducing your issue and will inform you once we have a solution.
Hi @brownplayer , could you share the versions of the dependencies you have installed to run open-webui (the output of `pip list`)? Also, you may refer to [ipex-llm open-webui...
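When the full `pip list` output is long, a short sketch like the one below narrows it to the packages most relevant here; the package names in the loop are examples to adjust for your environment:

```shell
# Report the installed version of a few example packages; prints
# "not installed" for anything pip cannot find.
report=""
for pkg in open-webui torch transformers accelerate; do
    version=$(pip show "$pkg" 2>/dev/null | awk '/^Version:/{print $2}')
    report="${report}${pkg} ${version:-not installed}\n"
done
printf "%b" "$report"
```

This prints one line per package, which is usually enough to spot a version mismatch.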
Same as https://github.com/intel-analytics/ipex-llm/issues/11907 . Could you please try downgrading the transformers version with `pip install transformers==4.37.0 accelerate`?