Ronky
> Hi [@RonkyTang](https://github.com/RonkyTang), we have released an optimized version on Ubuntu that can run the CLIP model on the GPU. You may install it via `pip install --pre --upgrade ipex-llm[cpp]`. Hi...
Hi @sgwhat, the preview version has a problem: we can't use the iGPU, but the release version works:
> This is expected behavior — Ollama does not utilize the iGPU until a model is loaded, at which point you will see VRAM usage increase. As for the confusing...
OK, I hope it's just a log printing error.
Hi @sgwhat, how do you make an Ollama-like portable package? I copied all the libraries that the ollama binary depends on to the ollama-bin directory and set environment variables, but...
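For reference, bundling a binary with its shared-library dependencies usually looks something like the sketch below. The directory names (`ollama-portable`, `./ollama`) are illustrative assumptions, not the project's actual layout:

```shell
# Sketch: bundle the ollama binary together with the shared libraries it links against.
mkdir -p ollama-portable/lib
cp ollama ollama-portable/

# ldd lists every shared library the binary resolves; copy them next to it.
ldd ollama | awk '/=> \//{print $3}' | xargs -I{} cp {} ollama-portable/lib/

# Launcher script: point the dynamic linker at the bundled libraries first.
cat > ollama-portable/run.sh <<'EOF'
#!/bin/sh
DIR="$(cd "$(dirname "$0")" && pwd)"
export LD_LIBRARY_PATH="$DIR/lib:$LD_LIBRARY_PATH"
exec "$DIR/ollama" "$@"
EOF
chmod +x ollama-portable/run.sh
```

Note that `LD_LIBRARY_PATH` must be set in the process that launches the binary (as the wrapper script does); exporting it only in your interactive shell is a common source of "library not found" errors.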
> Hi [@RonkyTang](https://github.com/RonkyTang), we have released a new Ollama version: https://www.modelscope.cn/models/Intel/ollama .

Hi @sgwhat, thank you for the update, but it still has memory issues.
> Hi [@RonkyTang](https://github.com/RonkyTang) , I apologize for the late reply. The memory usage depends on many factors, including different values of `num_parallel` and `num_ctx`. You can try adjusting these parameters...
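Tuning those two knobs might look like the sketch below. `OLLAMA_NUM_PARALLEL` is a standard Ollama server setting and `num_ctx` a standard request option; the model name and port are assumptions:

```shell
# Fewer parallel request slots means fewer simultaneously resident
# KV caches, which lowers peak memory use.
export OLLAMA_NUM_PARALLEL=1
ollama serve &

# num_ctx can be set per request; a smaller context window shrinks
# the KV cache allocated for that request. "llama3" is an example model.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "options": { "num_ctx": 2048 }
}'
```

`num_ctx` can also be baked into a model via a `PARAMETER num_ctx 2048` line in a Modelfile, so every request inherits it.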