S. Mellouk
@dhiltgen Integrated GPU
@dhiltgen Can you help me understand why it works with Docker?
@dhiltgen It's working fine in this setup:

* Proxmox running on the host machine
* Docker running in a Debian LXC
* Ollama running in Docker
* Shared GPU

**Ollama PS:** ```...
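For reference, the working Docker path in that setup typically exposes the GPU with the standard invocation from Ollama's Docker instructions (the container name and volume name here are just the usual defaults, adjust as needed):

```shell
# NVIDIA GPU passthrough into the Ollama container; requires the
# NVIDIA Container Toolkit on the Docker host (here, the Debian LXC).
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

If this container sees the GPU while Ollama installed directly in the LXC does not, the difference is likely in which driver/runtime libraries each environment can see.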
@dhiltgen These are the current logs, but I can't spot any difference; any help from you is appreciated. ON LXC ``` Jun 20 00:11:55 ai-llm ollama[377]: 2024/06/20 00:11:55 routes.go:1008: INFO...
Okay, ChatGPT did a great job. Here are the differences between the two logs:

### Configuration Differences

- **LXC Log**:
  - **OLLAMA_ORIGINS**: `[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:*...
The logs after trying to load tinyllama on LXC: ``` Jun 20 01:00:25 ai-llm ollama[15472]: [GIN] 2024/06/20 - 01:00:25 | 200 | 20.256µs | 127.0.0.1 | HEAD "/" Jun...
So it seems it's trying to do something related to CUDA ``` library was not found (discovered GPU libraries paths=[]) cudaSetDevice err: 35 error="your nvidia driver is too old...
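The `discovered GPU libraries paths=[]` line suggests Ollama's scan for the CUDA runtime came up empty inside the LXC. A minimal sketch of that kind of scan, for manual debugging (the function name and candidate directories are illustrative, not Ollama's actual code):

```shell
# Hypothetical helper: print every CUDA runtime library (libcudart.so*)
# found under the directories given as arguments.
search_cuda_libs() {
  for dir in "$@"; do
    # -maxdepth 2 keeps the scan cheap; errors for missing dirs are silenced
    find "$dir" -maxdepth 2 -name 'libcudart.so*' 2>/dev/null
  done
}

# Typical places to look on a Debian/Ubuntu guest:
# search_cuda_libs /usr/lib/x86_64-linux-gnu /usr/local/cuda/lib64
```

If this prints nothing inside the LXC but finds the library inside the Docker container, the difference is in what each guest filesystem exposes, not in Ollama itself.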
@dhiltgen Yes, I have ROCm installed on the LXC machine. It's odd that it behaves differently; could it be because the LXC is based on Ubuntu while the Docker image is based on CentOS...
I can try not installing ROCm and will share my findings. Regarding the ROCm version: on the LXC I'm using 6.1.1.
> If you aren't using ROCm for anything else on the host, a potential workaround is uninstall it, but we shouldn't stumble on a ROCm install and fail like this....
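Before uninstalling, it may help to confirm which ROCm libraries are actually visible in the LXC. A hedged helper for that check (`has_rocm_libs` is a made-up name; the layout assumes a standard `/opt/rocm`-style install where rocBLAS lives under `lib/`):

```shell
# Return success (exit 0) if ROCm's BLAS library exists under the given
# install prefix, failure otherwise.
has_rocm_libs() {
  ls "$1"/lib/librocblas.so* >/dev/null 2>&1
}

# Example check against a real install:
# has_rocm_libs /opt/rocm && echo "ROCm visible" || echo "no ROCm here"
```

If this reports ROCm as visible on the LXC but not inside the Docker container, it would line up with the theory that Ollama stumbles over the ROCm install before trying the CUDA path.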