Ollama 0.1.38 has high video memory usage and runs very slowly.
What is the issue?
I am using Windows 10 with an NVIDIA 2080 Ti graphics card that has 22GB of video memory. I upgraded from version 0.1.32 to 0.1.38 to get support for loading multiple models and handling multiple concurrent requests. However, under version 0.1.38 the video memory usage is much higher and generation has become much slower.
I am using the "codeqwen:7b-chat-v1.5-q8_0" model. Under version 0.1.32 it used around 8GB of video memory and output roughly 10 tokens per second. Under version 0.1.38 it uses 18.8GB of video memory and, from what I can observe, outputs only 1-2 tokens per second.
OS
Windows
GPU
Nvidia
CPU
Intel
Ollama version
0.1.38
Can confirm 0.1.38 seems to want more video memory
@chenwei0930 you mention enabling concurrency... what settings are you using? In particular, when you set `OLLAMA_NUM_PARALLEL` we have to multiply the context size by that number, and this model has a default context size of 8192, so a large parallel factor might explain what you're seeing. I wouldn't expect a drop in token rate for a single request, though. Perhaps `ollama ps` will help shed some light? Failing that, can you share server logs so we can see what might be going on?
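For reference, a minimal sketch of how to check this on Windows. `OLLAMA_NUM_PARALLEL` and `ollama ps` are real; the value below and the two-terminal workflow are only illustrative:

```powershell
# Start the server with a single parallel slot so only one context
# window's worth of KV cache is allocated per model (value is illustrative).
$env:OLLAMA_NUM_PARALLEL = "1"
ollama serve

# In another terminal, after sending a request:
ollama ps   # lists loaded models, their size, and the GPU/CPU split
```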
Do you use the "size of context" option? When I reduce it, the GPU memory usage drops.
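If that comment refers to the model's context-window setting, it can be adjusted per session. A minimal sketch, assuming this maps to Ollama's documented `num_ctx` parameter; the value is illustrative:

```
# A smaller context window shrinks the KV cache and therefore VRAM use.
ollama run codeqwen:7b-chat-v1.5-q8_0
>>> /set parameter num_ctx 2048
>>> /show parameters
```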
If you're still seeing unexpected memory usage, please share more details about your setup and I'll re-open the issue.
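For anyone gathering those details: per the Ollama troubleshooting docs, the Windows logs live under `%LOCALAPPDATA%\Ollama`. The snippet below assumes a default install:

```powershell
# Open the Ollama log directory on Windows; server.log records
# model loads and GPU offload decisions.
explorer "$env:LOCALAPPDATA\Ollama"
```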