
Minimal use of GPU in Docker (Windows)

Open: sumitsodhi88 opened this issue 1 year ago

My GPU is only at 23% utilization while the CPU sits at 100% when running the Docker image in a Windows environment.

sumitsodhi88 avatar Jan 18 '24 01:01 sumitsodhi88

Your GPU is waiting for your CPU, which is the result of the model not fitting in VRAM.
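
One way to see this in practice is to watch VRAM and GPU utilization while a prompt is running; a minimal sketch, assuming nvidia-smi is available on the host:

    # Sample GPU memory use and utilization once per second
    nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv -l 1

If memory.used is pinned near the card's limit while utilization.gpu stays low, the GPU is mostly idle, waiting on the layers that run on the CPU.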

easp avatar Jan 18 '24 02:01 easp

@sumitsodhi88 if you're still having this problem, can you share the server log?

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues
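
For a Docker install, the server log comes from the container itself; a minimal sketch, assuming the container is named ollama:

    # Follow the ollama server log in the running container
    docker logs -f ollama

For more verbose output, the server also honors an OLLAMA_DEBUG=1 environment variable (e.g. recreate the container with docker run -e OLLAMA_DEBUG=1 ...).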

dhiltgen avatar Jan 27 '24 00:01 dhiltgen

2024-01-27 07:11:32 2024/01/27 01:41:32 images.go:808: total blobs: 5
2024-01-27 07:11:32 2024/01/27 01:41:32 images.go:815: total unused blobs removed: 0
2024-01-27 07:11:32 2024/01/27 01:41:32 routes.go:930: Listening on [::]:11434 (version 0.1.20)
2024-01-27 07:11:36 2024/01/27 01:41:36 shim_ext_server.go:142: Dynamic LLM variants [cuda]
2024-01-27 07:11:36 2024/01/27 01:41:36 gpu.go:88: Detecting GPU type
2024-01-27 07:11:36 2024/01/27 01:41:36 gpu.go:203: Searching for GPU management library libnvidia-ml.so
2024-01-27 07:11:36 2024/01/27 01:41:36 gpu.go:248: Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1]
2024-01-27 07:11:36 2024/01/27 01:41:36 gpu.go:94: Nvidia GPU detected
2024-01-27 07:11:36 2024/01/27 01:41:36 gpu.go:135: CUDA Compute Capability detected: 6.1
2024-01-27 07:12:31 2024/01/27 01:42:31 gpu.go:135: CUDA Compute Capability detected: 6.1
2024-01-27 07:12:31 2024/01/27 01:42:31 gpu.go:135: CUDA Compute Capability detected: 6.1
2024-01-27 07:12:31 2024/01/27 01:42:31 shim_ext_server_linux.go:24: Updating PATH to /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/ollama3888797067/cuda
2024-01-27 07:12:31 2024/01/27 01:42:31 shim_ext_server.go:92: Loading Dynamic Shim llm server: /tmp/ollama3888797067/cuda/libext_server.so
2024-01-27 07:12:31 2024/01/27 01:42:31 ext_server_common.go:136: Initializing internal llama server
2024-01-27 07:12:32 ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
2024-01-27 07:12:32 ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
2024-01-27 07:12:32 ggml_init_cublas: found 1 CUDA devices:
2024-01-27 07:12:32   Device 0: NVIDIA GeForce GTX 1050, compute capability 6.1
2024-01-27 07:12:33 llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
2024-01-27 07:12:33 llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 4096, 32000, 1, 1 ]
[tensors 1-289 elided: the same per-layer attn/ffn weight listing, q4_0 and f32, repeated for blocks 0-31]
2024-01-27 07:12:33 llama_model_loader: - tensor 290: output_norm.weight f32 [ 4096, 1, 1, 1 ]
2024-01-27 07:12:33 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
2024-01-27 07:12:33 llama_model_loader: - kv 0: general.architecture str = llama
2024-01-27 07:12:33 llama_model_loader: - kv 1: general.name str = mistralai
2024-01-27 07:12:33 llama_model_loader: - kv 2: llama.context_length u32 = 32768
2024-01-27 07:12:33 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
2024-01-27 07:12:33 llama_model_loader: - kv 4: llama.block_count u32 = 32
2024-01-27 07:12:33 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
2024-01-27 07:12:33 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
2024-01-27 07:12:33 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
2024-01-27 07:12:33 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
2024-01-27 07:12:33 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
2024-01-27 07:12:33 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
2024-01-27 07:12:33 llama_model_loader: - kv 11: general.file_type u32 = 2
2024-01-27 07:12:33 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
2024-01-27 07:12:33 llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
2024-01-27 07:12:33 llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
2024-01-27 07:12:33 llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
2024-01-27 07:12:33 llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
2024-01-27 07:12:33 llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
2024-01-27 07:12:33 llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
2024-01-27 07:12:33 llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
2024-01-27 07:12:33 llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
2024-01-27 07:12:33 llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
2024-01-27 07:12:33 llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
2024-01-27 07:12:33 llama_model_loader: - kv 23: general.quantization_version u32 = 2
2024-01-27 07:12:33 llama_model_loader: - type f32: 65 tensors
2024-01-27 07:12:33 llama_model_loader: - type q4_0: 225 tensors
2024-01-27 07:12:33 llama_model_loader: - type q6_K: 1 tensors
2024-01-27 07:12:33 llm_load_vocab: special tokens definition check successful ( 259/32000 ).
2024-01-27 07:12:33 llm_load_print_meta: format = GGUF V3 (latest)
2024-01-27 07:12:33 llm_load_print_meta: arch = llama
2024-01-27 07:12:33 llm_load_print_meta: vocab type = SPM
2024-01-27 07:12:33 llm_load_print_meta: n_vocab = 32000
2024-01-27 07:12:33 llm_load_print_meta: n_merges = 0
2024-01-27 07:12:33 llm_load_print_meta: n_ctx_train = 32768
2024-01-27 07:12:33 llm_load_print_meta: n_embd = 4096
2024-01-27 07:12:33 llm_load_print_meta: n_head = 32
2024-01-27 07:12:33 llm_load_print_meta: n_head_kv = 8
2024-01-27 07:12:33 llm_load_print_meta: n_layer = 32
2024-01-27 07:12:33 llm_load_print_meta: n_rot = 128
2024-01-27 07:12:33 llm_load_print_meta: n_gqa = 4
2024-01-27 07:12:33 llm_load_print_meta: f_norm_eps = 0.0e+00
2024-01-27 07:12:33 llm_load_print_meta: f_norm_rms_eps = 1.0e-05
2024-01-27 07:12:33 llm_load_print_meta: f_clamp_kqv = 0.0e+00
2024-01-27 07:12:33 llm_load_print_meta: f_max_alibi_bias = 0.0e+00
2024-01-27 07:12:33 llm_load_print_meta: n_ff = 14336
2024-01-27 07:12:33 llm_load_print_meta: n_expert = 0
2024-01-27 07:12:33 llm_load_print_meta: n_expert_used = 0
2024-01-27 07:12:33 llm_load_print_meta: rope scaling = linear
2024-01-27 07:12:33 llm_load_print_meta: freq_base_train = 1000000.0
2024-01-27 07:12:33 llm_load_print_meta: freq_scale_train = 1
2024-01-27 07:12:33 llm_load_print_meta: n_yarn_orig_ctx = 32768
2024-01-27 07:12:33 llm_load_print_meta: rope_finetuned = unknown
2024-01-27 07:12:33 llm_load_print_meta: model type = 7B
2024-01-27 07:12:33 llm_load_print_meta: model ftype = Q4_0
2024-01-27 07:12:33 llm_load_print_meta: model params = 7.24 B
2024-01-27 07:12:33 llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
2024-01-27 07:12:33 llm_load_print_meta: general.name = mistralai
2024-01-27 07:12:33 llm_load_print_meta: BOS token = 1 '<s>'
2024-01-27 07:12:33 llm_load_print_meta: EOS token = 2 '</s>'
2024-01-27 07:12:33 llm_load_print_meta: UNK token = 0 '<unk>'
2024-01-27 07:12:33 llm_load_print_meta: LF token = 13 '<0x0A>'
2024-01-27 07:12:33 llm_load_tensors: ggml ctx size = 0.11 MiB
2024-01-27 07:12:33 llm_load_tensors: using CUDA for GPU acceleration
2024-01-27 07:12:33 llm_load_tensors: mem required = 2747.67 MiB
2024-01-27 07:12:33 llm_load_tensors: offloading 10 repeating layers to GPU
2024-01-27 07:12:33 llm_load_tensors: offloaded 10/33 layers to GPU
2024-01-27 07:12:33 llm_load_tensors: VRAM used: 1170.31 MiB
2024-01-27 07:12:51 ...................................................................................................
2024-01-27 07:12:51 llama_new_context_with_model: n_ctx = 2048
2024-01-27 07:12:51 llama_new_context_with_model: freq_base = 1000000.0
2024-01-27 07:12:51 llama_new_context_with_model: freq_scale = 1
2024-01-27 07:12:51 llama_kv_cache_init: VRAM kv self = 80.00 MB
2024-01-27 07:12:51 llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
2024-01-27 07:12:51 llama_build_graph: non-view tensors processed: 676/676
2024-01-27 07:12:51 llama_new_context_with_model: compute buffer total size = 159.19 MiB
2024-01-27 07:12:52 llama_new_context_with_model: VRAM scratch buffer: 156.00 MiB
2024-01-27 07:12:52 llama_new_context_with_model: total VRAM used: 1406.32 MiB (model: 1170.31 MiB, context: 236.00 MiB)
2024-01-27 07:12:53 2024/01/27 01:42:53 ext_server_common.go:144: Starting internal llama main loop
2024-01-27 07:12:53 2024/01/27 01:42:53 ext_server_common.go:158: loaded 0 images
2024-01-27 07:13:14 [GIN] 2024/01/27 - 01:43:14 | 200 | 43.843368937s | 172.17.0.1 | POST "/api/generate"
2024-01-27 07:13:22 2024/01/27 01:43:22 ext_server_common.go:158: loaded 0 images
2024-01-27 07:14:30 [GIN] 2024/01/27 - 01:44:30 | 200 | 1m7s | 172.17.0.1 | POST "/api/generate"
2024-01-27 07:15:20 2024/01/27 01:45:20 ext_server_common.go:158: loaded 0 images

sumitsodhi88 avatar Jan 27 '24 01:01 sumitsodhi88

I am using an old Acer Nitro 5 gaming PC with an Nvidia 1050 (2 GB VRAM) as a test bed before scaling. I'm using the Ollama API for the AnythingLLM project, with both running in Docker.

sumitsodhi88 avatar Jan 27 '24 02:01 sumitsodhi88

2024-01-27 07:12:33 llm_load_tensors: offloaded 10/33 layers to GPU

So roughly one-third of the model is loaded onto the GPU, the remaining two-thirds runs on your CPU, and the I/O bandwidth between the two can have a significant performance impact. You can use a smaller model to get more (or ideally all) of it to fit in your GPU's VRAM, or force CPU-only mode and see whether that is actually faster, since it cuts out the I/O between system memory and the GPU.
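
Both experiments can be driven through the API's options field; a minimal sketch, assuming the mistral model from the log above (num_gpu sets the number of layers offloaded, and 0 forces CPU-only inference):

    # Force CPU-only inference by offloading zero layers to the GPU
    curl http://localhost:11434/api/generate -d '{
      "model": "mistral",
      "prompt": "Why is the sky blue?",
      "options": { "num_gpu": 0 }
    }'

Comparing response times between this and a default run shows whether the partial GPU offload is helping or hurting on this card.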

https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#llm-libraries

dhiltgen avatar Jan 27 '24 15:01 dhiltgen

My GPU has only 2 GB of VRAM, but my issue is that it's not being used fully. Without Docker it loads 16 layers; with Docker, only 10. CPU-only is painfully slow, since the CPU also has another Docker image loaded (AnythingLLM) and is only a 7th-gen i5.

sumitsodhi88 avatar Jan 27 '24 16:01 sumitsodhi88

@sumitsodhi88 a 1050 with 2 GB of VRAM isn't going to handle LLM serving very well. You'll need a model smaller than 2 GB, or it won't load all the layers onto the GPU. The reason it isn't using all of the VRAM is likely a fixed batch size: loading another batch would push VRAM use above the available size.
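
As a rough sketch of trying a smaller model (tinyllama here is just an illustrative choice of a model whose quantized weights fit well under 2 GB):

    # Run a much smaller model inside the existing container
    docker exec -it ollama ollama run tinyllama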

You may want to consider a new system, or try running the AVX2 build on the CPU. It won't be as fast as GPU acceleration, but it'll be faster than the baseline CPU path.
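
The LLM-libraries section of the troubleshooting doc linked earlier covers selecting a specific runtime; a sketch, assuming the OLLAMA_LLM_LIBRARY override it documents:

    # Force the AVX2-optimized CPU runtime instead of the CUDA one
    docker run -d -e OLLAMA_LLM_LIBRARY=cpu_avx2 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama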

remy415 avatar Jan 31 '24 01:01 remy415

@sumitsodhi88 we've been adjusting our memory prediction calculations quite a bit over the past few weeks, and while they're still not perfect, we're aiming to get relatively close to saturating the GPU's VRAM without overshooting and causing OOMs. My suspicion about the differing behavior between local and container is that you're running different versions of the server in each. Make sure to docker pull ollama/ollama to get the latest image, and check the server logs near the beginning to confirm both your host and container are running the same version. If they still differ radically in the number of layers loaded, please share the two logs so we can see more details.
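
A quick way to compare the two installs, assuming the container is named ollama:

    # Update to the latest image, then compare host and container versions
    docker pull ollama/ollama
    ollama --version
    docker exec ollama ollama --version

The version is also printed near the top of the server log (e.g. the 'Listening on [::]:11434 (version 0.1.20)' line above).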

dhiltgen avatar Jan 31 '24 16:01 dhiltgen

Unfortunately, with a 2 GB video card there are diminishing returns as to which layers can be offloaded reliably onto the GPU.

I'm going to go ahead and close this for now. I think ollama is getting better at placing different layers on the GPU, but beyond a certain point it's difficult to eke out performance on older, low-performance cards.

pdevine avatar Mar 11 '24 18:03 pdevine