CUDA error 2: out of memory (for a 33 billion param model, but I have 39GB of VRAM available across 4 GPUs)
The model I'm trying to run is deepseek-coder:33b and journalctl -u ollama outputs:
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:292: 39320 MB VRAM available, loading up to 101 GPU layers
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:421: starting llama runner
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:479: waiting for llama runner to start responding
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: found 4 CUDA devices:
Dec 11 18:31:37 x99 ollama[25964]: Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]: Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]: Device 2: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]: Device 3: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
Dec 11 18:31:39 x99 ollama[26042]: {"timestamp":1702290699,"level":"INFO","function":"main","line":2534,"message":"build info","build":375,"commit":"9656026"}
Dec 11 18:31:39 x99 ollama[26042]: {"timestamp":1702290699,"level":"INFO","function":"main","line":2537,"message":"system info","n_threads":18,"n_threads_batch":-1,"total_threads":36,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
Dec 11 18:31:39 x99 ollama[25964]: llama_model_loader: loaded meta data with 22 key-value pairs and 561 tensors from /usr/share/ollama/.ollama/models/blobs/sha256:137fe898f00f9b709b8ca96c549f64ad6a36ab85720cf10d3c24ac07389ab8fb (version GGUF V2)
---[snip]---
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: ggml ctx size = 0.21 MiB
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: using CUDA for GPU acceleration
Dec 11 18:31:39 x99 ollama[25964]: ggml_cuda_set_main_device: using device 0 (NVIDIA GeForce GTX 1080 Ti) as main device
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: mem required = 124.24 MiB
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloading 62 repeating layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloading non-repeating layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloaded 65/65 layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: VRAM used: 17822.33 MiB
Dec 11 18:31:43 x99 ollama[25964]: ...................................................................................................
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: n_ctx = 16384
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: freq_base = 100000.0
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: freq_scale = 0.25
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: offloading v cache to GPU
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: offloading k cache to GPU
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: VRAM kv self = 3968.00 MiB
Dec 11 18:31:45 x99 ollama[25964]: llama_new_context_with_model: kv self size = 3968.00 MiB
Dec 11 18:31:45 x99 ollama[25964]: llama_build_graph: non-view tensors processed: 1430/1430
Dec 11 18:31:45 x99 ollama[25964]: llama_new_context_with_model: compute buffer total size = 1869.07 MiB
Dec 11 18:31:46 x99 ollama[25964]: llama_new_context_with_model: VRAM scratch buffer: 1866.00 MiB
Dec 11 18:31:46 x99 ollama[25964]: llama_new_context_with_model: total VRAM used: 23656.33 MiB (model: 17822.33 MiB, context: 5834.00 MiB)
Dec 11 18:31:46 x99 ollama[25964]: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7973: out of memory
Dec 11 18:31:46 x99 ollama[25964]: current device: 0
Dec 11 18:31:47 x99 ollama[25964]: 2023/12/11 18:31:47 llama.go:436: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7973: out of memory
Dec 11 18:31:47 x99 ollama[25964]: current device: 0
Dec 11 18:31:47 x99 ollama[25964]: 2023/12/11 18:31:47 llama.go:444: error starting llama runner: llama runner process has terminated
Ollama correctly identifies all 4 GPUs and reports a collective 39320 MB of VRAM available, loading up to 101 GPU layers (first line of the logs).
It then appears to load all 65 layers successfully, yet somehow an OOM error is still triggered.
How can I manually change the number of layers loaded to the GPU to debug this issue?
See https://github.com/jmorganca/ollama/issues/618#issuecomment-1737547046
The num_gpu parameter solved the problem for me. On my machine (only 12 GB of VRAM), ollama loaded 43 layers and failed with the same error as above, but it runs smoothly with 40 layers (I didn't try 41 or 42, though).
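For anyone who wants to try the same thing, here's a minimal sketch (the custom tag deepseek-coder-40layers and the value 40 are placeholders; pick whatever fits your VRAM):

# Bake the layer limit into a model variant via a Modelfile.
cat > Modelfile <<'EOF'
FROM deepseek-coder:33b
PARAMETER num_gpu 40
EOF
ollama create deepseek-coder-40layers -f Modelfile
ollama run deepseek-coder-40layers

# Or pass it per request through the API instead:
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:33b",
  "prompt": "hello",
  "options": { "num_gpu": 40 }
}'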
Likely a bug that was introduced in later versions. Try version 0.1.11.
IIRC llama.cpp only allocates the context on a single GPU. With large contexts this messes up the calculation of layer splits. Not sure what a workaround would be.
@easp For llama.cpp, there's the --tensor-split flag to work around this issue by allocating fewer tensor layers to the "main" GPU so that more VRAM can be reserved for the context. Either allow that flag to be passed through ollama (currently not supported), or be smart about estimating context + layer size (since there's already a heuristic for estimating how many layers will fit) and perform the split accordingly.
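For reference, this is roughly what that looks like when running llama.cpp's server directly rather than through ollama (the model file name and the split ratios below are made-up placeholders for a 3x 1080 Ti + 1x 1060 box):

# Offload all 65 layers, but give device 0 (the "main" GPU, which also holds the
# KV cache and scratch buffers) a smaller share of the layers than the others.
./server -m deepseek-coder-33b.Q4_K_M.gguf \
  --n-gpu-layers 65 \
  --main-gpu 0 \
  --tensor-split 2,3,3,1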
How would I revert to version 0.1.11? I installed Ollama over a month ago and it was running perfectly. I upgraded today and keep getting OOM errors.
https://github.com/jmorganca/ollama/releases/tag/v0.1.11
Leave a reply afterwards if it works.
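If it helps, a rough sketch of pinning that release on Linux (assuming a systemd install and that the release asset is named ollama-linux-amd64, as on the releases page):

sudo systemctl stop ollama
curl -L https://github.com/jmorganca/ollama/releases/download/v0.1.11/ollama-linux-amd64 -o /tmp/ollama
sudo install /tmp/ollama "$(which ollama)"   # overwrite whichever binary the installer placed
sudo systemctl start ollama
ollama --version                             # should now report 0.1.11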
I got that version installed and it's officially working again. Tested with multiple models. Thank you!
@BruceMacD Looks like at least 3 people have been able to get rid of their OOM problems by reverting to version 0.1.11. Failing to load a model that is much smaller than the available VRAM is clearly a bug, and it only happens with versions 0.1.12+. Lots of people would love to try out Mixtral but can't because of this issue.
git clone --recursive https://github.com/jmorganca/ollama.git
cd ollama/llm/llama.cpp
vi generate_linux.go
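# CUDA build directives from generate_linux.go: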
//go:generate cmake -S ggml -B ggml/build/cuda -DLLAMA_CUBLAS=on -DLLAMA_ACCELERATE=on -DLLAMA_K_QUANTS=on -DLLAMA_CUDA_FORCE_MMQ=on
//go:generate cmake --build ggml/build/cuda --target server --config Release
//go:generate mv ggml/build/cuda/bin/server ggml/build/cuda/bin/ollama-runner
//go:generate cmake -S gguf -B gguf/build/cuda -DLLAMA_CUBLAS=on -DLLAMA_ACCELERATE=on -DLLAMA_K_QUANTS=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DLLAMA_CUDA_PEER_MAX_BATCH_SIZE=0 -DLLAMA_CUDA_FORCE_MMQ=on
//go:generate cmake --build gguf/build/cuda --target server --config Release
//go:generate mv gguf/build/cuda/bin/server gguf/build/cuda/bin/ollama-runner
cd ../..
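# regenerate the llama.cpp runners and rebuild the ollama binary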
go generate ./...
go build .
@phalexo Sorry, I'm not sure what this is meant to show. What's the relevance? Are you implying these are the lines causing the OOM bug, or something else?
@peteygao we've made a bunch of improvements in how we do memory prediction calculations. Can you give the latest release a try (0.1.22) and see if it works properly on your setup?
I won't be able to, I got an error on the last version claiming my GPUs are too old, so I may be stuck at this version.
Related issues: #1865 and #1756
Look at the docker files to see which version of Go to use. It may be your problem.
On Tue, Jan 30, 2024 at 7:05 PM Davery92 wrote:
So I pulled the newest release and it still runs only on CPU. So I pulled the repo to build from source, and every time I run go build I get these errors:
parser/parser.go:9:2: package log/slog is not in GOROOT (/usr/lib/go-1.18/src/log/slog)
parser/parser.go:10:2: package slices is not in GOROOT (/usr/lib/go-1.18/src/slices)
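For what it's worth, log/slog and slices only exist in the standard library from Go 1.21 onward, and those error paths point at a Go 1.18 toolchain, so a version mismatch is the likely cause. A quick way to check (the grep is just an assumption about where the repo pins its Go version):

go version                    # which toolchain go build is actually using
grep -i golang Dockerfile     # what Go version the project's own images build with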
I deleted my comment because I'm stupid and had an old version of Go, but I fixed it and Mixtral works!! Across both my GPUs!! Except ollama serve locks up after roughly 8 messages. The API stops accepting requests and I can't even execute ollama run {model}.
Happy to hear you got it working @Davery92 but sad you hit a hang/crash. Can you share the server logs? If there's not much in them, setting OLLAMA_DEBUG=1 might yield more insight into the nature of the hang.
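In case it's useful, a minimal sketch of capturing that output (assuming you start the server by hand rather than through systemd):

# Run the server with debug logging enabled and keep a copy of the output.
OLLAMA_DEBUG=1 ollama serve 2>&1 | tee ollama-debug.log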
Sure, I can try that when I get home. However, I had ollama serve open this morning while I was chatting, and it was just showing the API POST after each successful generation, and then nothing. My API calls would go nowhere, ollama run would just sit and spin, and when I tried to close the ollama server it would hang until I killed the PID. There's no error or anything; it just freezes.
If you're still seeing OOMs or hangs, please give the latest release a try and let us know and we'll re-open the issue.
https://github.com/ollama/ollama/releases
I have 4 GPUs with 12.2 GiB of VRAM each and 1 GPU with 4 GiB; all five are compute capability 5.2. I used to be able to use all five, and ollama was smart enough not to put more than 4 GiB on the last GPU.
Now it causes an error, so I can't use the 5th GPU anymore.
@phalexo Sorry to hear that. It seems like this is a new issue, not related to the original problem in this issue. Can you file a new issue and include the server log with OLLAMA_DEBUG=1 set, so we can see exactly what the scheduler and memory prediction algorithms are doing and why they're exceeding the VRAM on your smaller GPU?