
8 GPUs: how to start 8 instances of the same model?

Open · AltenLi opened this issue 4 days ago · 2 comments

I have tried many approaches. Environment: Windows 10 22H, latest Ollama nightly build.

Methods tried:

1. Multiple `ollama serve` instances: failed to split the instances across GPUs.
2. Multiple Ollama Docker containers: `--gpus all` with `CUDA_VISIBLE_DEVICES=0`, `--gpus all` with `CUDA_VISIBLE_DEVICES=1`, and so on, each mounting a separate local copy of the model path into the container (to avoid re-downloading). Model loading was extremely slow (about 15 minutes for a 32B model), and the containers were often killed by running out of GPU VRAM.

What I need: multiple servers of the same model running locally, one per GPU.

Thanks!!!

AltenLi · Feb 24 '25 04:02