Ollama crashes (OOM) with LLaVA on Windows after passing an image path
Ollama crashes when I try the following with LLaVA:
What's in this image? C:\Users\test\Downloads\pexels-oleksandr-p-321552.jpg
Hi there, would it be possible to share your machine specs? Thanks so much!
If you also have an image handy that causes the crash, that will help us debug.
Systeminfo:
System Type:               x64-based PC
OS Name:                   Microsoft Windows 10 Pro for Workstations
OS Version:                10.0.19045 N/A Build 19045
Processor(s):              2 Processor(s) Installed.
                           [01]: Intel64 Family 6 Model 85 Stepping 7 GenuineIntel ~2295 Mhz
                           [02]: Intel64 Family 6 Model 85 Stepping 7 GenuineIntel ~2295 Mhz
Windows Directory:         C:\Windows
System Directory:          C:\Windows\system32
Boot Device:               \Device\HarddiskVolume2
Total Physical Memory:     270,039 MB
Available Physical Memory: 254,649 MB
Virtual Memory: Max Size:  308,951 MB
Virtual Memory: Available: 292,091 MB
Virtual Memory: In Use:    16,860 MB
nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 528.89       Driver Version: 528.89       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 4000     WDDM | 00000000:2D:00.0  On |                  N/A |
| 30%   36C    P8    10W / 125W |    371MiB /  8192MiB |      6%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
link to image: https://c4.wallpaperflare.com/wallpaper/269/758/332/funny-cats-wallpaper-preview.jpg
Command: PS C:\Users\test> ollama run llava:34b
Whats in this image ? C:\Users\test\Downloads\funny-cats-wallpaper-preview.jpg
Added image 'C:\Users\test\Downloads\funny-cats-wallpaper-preview.jpg'
Error: Post "http://127.0.0.1:11434/api/chat": read tcp 127.0.0.1:64261->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host.
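For debugging, the same failure can also be reproduced without the CLI by posting to the /api/chat endpoint directly. This is only a rough sketch: the model name, endpoint, and image path come from the report above, and the request shape follows Ollama's documented chat API (a messages array with base64-encoded images); everything else is just one way to build the request from PowerShell.

```powershell
# Encode the image and send it straight to Ollama's /api/chat endpoint.
$imgPath = "C:\Users\test\Downloads\funny-cats-wallpaper-preview.jpg"
$img     = [Convert]::ToBase64String([IO.File]::ReadAllBytes($imgPath))

$body = @{
    model    = "llava:34b"
    stream   = $false
    messages = @(
        @{ role = "user"; content = "Whats in this image ?"; images = @($img) }
    )
} | ConvertTo-Json -Depth 5

# If the server crashes while encoding the image, this call fails the same way
# the CLI does (connection forcibly closed by the remote host).
Invoke-RestMethod -Uri "http://127.0.0.1:11434/api/chat" -Method Post `
    -ContentType "application/json" -Body $body
```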
Server Log:
[GIN] 2024/02/16 - 12:55:51 | 200 | 14.3987587s | 127.0.0.1 | POST "/api/chat"
[1708106151] all slots are idle and system prompt is empty, clear the KV cache
time=2024-02-16T12:56:03.377-05:00 level=DEBUG source=prompt.go:175 msg="prompt now fits in context window" required=796 window=2048
time=2024-02-16T12:56:03.377-05:00 level=DEBUG source=routes.go:1205 msg="chat handler" prompt="<|im_start|>system\n<|im_end|>\n<|im_start|>user\nWhats in this image ? [img-0]<|im_end|>\n<|im_start|>assistant\n" images=1
time=2024-02-16T12:56:03.377-05:00 level=INFO source=dyn_ext_server.go:166 msg="loaded 1 images"
[1708106163] slot 0 - loaded image
[1708106163] slot 0 is processing [task id: 0]
[1708106163] slot 0 : kv cache rm - [0, end)
[1708106163] slot 0 - encoding image [id: 0]
CUDA error: out of memory
  current device: 0, in function ggml_cuda_pool_malloc_vmm at C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:7834
  cuMemSetAccess(g_cuda_pool_addr[device] + g_cuda_pool_size[device], reserve_size, &access, 1)
GGML_ASSERT: C:\Users\jeff\git\ollama\llm\llama.cpp\ggml-cuda.cu:241: !"CUDA error"
clip_model_load: model name:   openai/clip-vit-large-patch14-336
clip_model_load: description:  image encoder for LLaVA
clip_model_load: GGUF version: 3
clip_model_load: alignment:    32
clip_model_load: n_tensors:    377
clip_model_load: n_kv:         19
clip_model_load: ftype:        f16
clip_model_load: loaded meta data with 19 key-value pairs and 377 tensors from C:\Users\test\.ollama\models\blobs\sha256-83720bd8438ccdc910deba5efbdc3340820b29258d94a7a60d1addc9a1b5f095
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv   0: general.architecture                     str  = clip
clip_model_load: - kv   1: clip.has_text_encoder                    bool = false
clip_model_load: - kv   2: clip.has_vision_encoder                  bool = true
clip_model_load: - kv   3: clip.has_llava_projector                 bool = true
clip_model_load: - kv   4: general.file_type                        u32  = 1
clip_model_load: - kv   5: general.name                             str  = openai/clip-vit-large-patch14-336
clip_model_load: - kv   6: general.description                      str  = image encoder for LLaVA
clip_model_load: - kv   7: clip.projector_type                      str  = mlp
clip_model_load: - kv   8: clip.vision.image_size                   u32  = 336
clip_model_load: - kv   9: clip.vision.patch_size                   u32  = 14
clip_model_load: - kv  10: clip.vision.embedding_length             u32  = 1024
clip_model_load: - kv  11: clip.vision.feed_forward_length          u32  = 4096
clip_model_load: - kv  12: clip.vision.projection_dim               u32  = 768
clip_model_load: - kv  13: clip.vision.attention.head_count         u32  = 16
clip_model_load: - kv  14: clip.vision.attention.layer_norm_epsilon f32  = 0.000010
clip_model_load: - kv  15: clip.vision.block_count                  u32  = 23
clip_model_load: - kv  16: clip.vision.image_mean                   arr[f32,3] = [0.481455, 0.457828, 0.408211]
clip_model_load: - kv  17: clip.vision.image_std                    arr[f32,3] = [0.268630, 0.261303, 0.275777]
clip_model_load: - kv  18: clip.use_gelu                            bool = false
clip_model_load: - type  f32: 235 tensors
clip_model_load: - type  f16: 142 tensors
clip_model_load: CLIP using CUDA backend
clip_model_load: text_encoder:    0
clip_model_load: vision_encoder:  1
clip_model_load: llava_projector: 1
clip_model_load: model size:     667.51 MB
clip_model_load: metadata size:  0.14 MB
clip_model_load: params backend buffer size = 667.51 MB (377 tensors)
clip_model_load: compute allocated memory: 33.75 MB
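The CLIP encoder hits the CUDA OOM even though nvidia-smi shows only 371MiB in use at idle, so the 8 GiB card fills up during model load. One way to see how close it gets (not part of the original report, just a debugging aid) is to keep nvidia-smi polling in a second terminal while the model and image encoder load:

```powershell
# Print GPU memory usage every second while Ollama loads the model.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
```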
Did anyone manage to find a solution to this bug?
Thanks.
cc @jmorganca @dhiltgen
Unfortunately, it looks like our memory prediction algorithm didn't work correctly for this setup: we attempted to load too many layers onto the GPU and it ran out of VRAM. We're continuing to improve our calculations to avoid this.
In the next release (0.1.29) we'll be adding a new setting that lets you cap VRAM usage to work around this type of crash until we get the prediction logic fixed: OLLAMA_MAX_VRAM=<bytes>
For example, I believe your GPU is an 8G card, so you could start at 7G and experiment until you find the setting that loads as many layers as possible without hitting the OOM crash: OLLAMA_MAX_VRAM=7516192768
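For anyone who wants to try this, here is a minimal sketch of applying it on Windows, assuming you start the server yourself from PowerShell (you may need to quit the already-running Ollama app first so port 11434 is free):

```powershell
# 7 GiB = 7 * 1024 * 1024 * 1024 = 7516192768 bytes
$env:OLLAMA_MAX_VRAM = "7516192768"

# Start the server in this session so it picks up the variable,
# then run the model from another terminal as usual (e.g. ollama run llava:34b).
ollama serve
```

Lowering the value further trades GPU layers for system RAM, so it's worth experimenting to find the highest setting that no longer crashes.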
Hi, this should be improved now: memory estimation for LLaVA image models is much more accurate. Please let me know if you're still seeing OOM errors.