Add support for Nvidia Jetson
I believe Ollama is a great project. I have tried different ideas to get Ollama to utilise the GPU, but it still uses the CPU. I have currently flashed JetPack 6 DP onto the AGX Orin Dev Kit. I believe this JetPack version will make it easier for Ollama to use the GPU, if you are able to add support for it.
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:08:11_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.2.0 Driver Version: N/A CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Orin (nvgpu) N/A | N/A N/A | N/A |
| N/A N/A N/A N/A / N/A | Not Supported | N/A N/A |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
Thank you
Just echoing the above issue. I've attempted to run the Docker container for Ollama. Running it with this parameter (as instructed):
--gpus=all
does not work. Per the above user's comment, JetPack and CUDA are all available, but only CPU processing works with the container.
I've also tried this Docker invocation, and it doesn't work either:
docker run --runtime nvidia ...
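For reference, the rest of the command follows the standard Ollama Docker invocation; adapted to the nvidia runtime it would look roughly like this (the volume and port mapping are just the documented defaults, nothing Jetson-specific):
# standard docs invocation, adapted to --runtime nvidia (volume/port are the documented defaults)
docker run -d --runtime nvidia -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama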
Thank you
+1
This is by no means solved yet, but I'm now monitoring this issue, which you may want to follow too:
https://github.com/ollama/ollama/issues/1979
On my Jetson Xavier AGX with JetPack 5.1, Ollama worked fine on the GPU up to version 0.1.17. But now, on the new Jetson Orin AGX, it is not even possible to install a specific version (0.1.17) with this command: curl -fsSL https://ollama.com/install.sh | sed 's#https://ollama.com/download#https://github.com/jmorganca/ollama/releases/download/v0.1.17#' | sh
It always installs the latest version (0.1.25).
Something was changed after 0.1.17, so the GPU is no longer seen by Ollama.
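One possible workaround, assuming the v0.1.17 release also publishes an ollama-linux-arm64 asset (the rc builds discussed later in this thread do), is to skip install.sh and fetch the binary directly:
# hypothetical direct download; check the v0.1.17 release page for the actual asset name
curl -L https://github.com/jmorganca/ollama/releases/download/v0.1.17/ollama-linux-arm64 -o ollama
chmod +x ollama
./ollama serve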
@telemetrieTP23 Look here https://github.com/ollama/ollama/issues/1979
Doesn't work on a Jetson Nano with JetPack 4.6, meaning the GPU is not used after following the instructions provided at https://github.com/ollama/ollama/blob/main/docs/tutorials/nvidia-jetson.md
@klimchuk Not sure if the fix will support Jetpack 4.6 (will def work with 5.1.x), but check and read here: https://github.com/ollama/ollama/issues/1979
Yes, these Jetson Nano devices with 4 GB of RAM are capable of running a pretty wide range of models, from BERT to 7B LLMs with quantization. It's pretty sad to see a single-board computer launched in 2019 be nearly useless for running language-related models.
Edit: after a few days of research, it looks like the Jetson Nano GPU's ancient architecture and the ancient toolchain provided by Nvidia make it nearly impossible to run language-related models on it. You basically can't even use PyTorch > 1.10 on it.
This should now be fixed with the merge of #2279.
Hey, thanks so much everyone! I just want to confirm that the merge is complete and that, if I update Ollama, the Jetson GPU should now be supported. Would that be the same for the Docker image, or should I just run the installer to save myself a headache? Once again, thank you everyone!
@MrDelusionAI I have not done anything with containers yet. I'm still digging through dusty-nv's container resources to figure it out; I have been concentrating on getting the binary to work on bare metal. I don't think containers work yet, due to how quirky containers are on Jetson devices with GPU support.
If you pull the repo and compile it, that binary should work on your Jetson. I think Jetson support will be in their next binary release (0.1.30?). Keep checking their releases if you don't want to self-compile.
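For anyone who hasn't built it before, a rough sketch of the native build (assuming Go 1.21+, cmake, and gcc are already installed; this follows the repo's generate/build workflow):
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # builds the bundled llama.cpp runners against the local CUDA toolkit
go build .          # produces the ollama binary in the repo root
./ollama serve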
@MrDelusionAI If you want to build your own Ollama container to run as a service on a Jetson device, please see this. I tested it on my Jetson Orin Nano 8gb running L4T r35.4.1. Please let me know if you have any issues.
Oh great, thanks. I will try both the binary, once it's pushed into the main version, and the container-as-a-service from your link. I'm running JetPack 6, so I will follow your guidance.
Thanks for everyone's efforts!
The pre-release for 0.1.30 is available now, and contains @remy415's change. I don't have a Jetson yet so I can't validate the build, but folks should give it a spin and let us know how it goes.
https://github.com/ollama/ollama/releases/tag/v0.1.30-rc4
I copied the binary from the 0.1.30-rc4 container and it had some issues running. I did notice you pushing ARM changes so I’ll try again when the container is updated. I haven’t tried pulling the binary directly, I will do so when I get home.
I’m also replicating the ARM build workflow in the centos containers, I’ll report back when I have an update.
Awesome, it looks promising. I just tried the rc4 ARM binary this morning. It seems the GPU is detected, CUDA is bound, and offloading works. However, after running a model, the process gets stuck in a long-running loop (high CPU load). No prompt is served. After several minutes it crashes. 2024.03.27-ollama-jetson6.log
@dhiltgen I copied the syntax from your workflow for ARM (cuda centos container + commands). It compiled on my Jetson, found the GPU, and then crashed similarly to what was reported above. I have a hunch it may be related to how the two OSs compile the binary (centos vs ubuntu 20.04). I turned on as much debugging as I could and ran a binary compiled with the Centos container workflow vs the binary I compiled directly on my Jetson:
- Jetson
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 70.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.00 MiB
llama_new_context_with_model: graph nodes = 1060
llama_new_context_with_model: graph splits = 2
[1711553138] warming up the model with an empty run
{"function":"initialize","level":"INFO","line":422,"msg":"initializing slots","n_slots":1,"tid":"281471143109072","timestamp":1711553141}
{"function":"initialize","level":"INFO","line":431,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"281471143109072","timestamp":1711553141}
time=2024-03-27T15:25:41.243Z level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
[1711553141] llama server main loop starting
{"function":"update_slots","level":"INFO","line":1550,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"281471101694416","timestamp":1711553141}
time=2024-03-27T15:25:41.250Z level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=1 window=2048
[GIN] 2024/03/27 - 15:25:41 | 200 | 40.615734458s | 127.0.0.1 | POST "/api/chat"
- Centos:
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 70.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 164.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 12.00 MiB
llama_new_context_with_model: graph nodes = 1060
llama_new_context_with_model: graph splits = 2
[1711552130] warming up the model with an empty run
CUDA error: CUBLAS_STATUS_EXECUTION_FAILED
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /opt/ollama/llm/llama.cpp/ggml-cuda.cu:10604
cublasGemmBatchedEx(ctx.cublas_handle(), CUBLAS_OP_T, CUBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), CUDA_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), CUDA_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
GGML_ASSERT: /opt/ollama/llm/llama.cpp/ggml-cuda.cu:193: !"CUDA error"
<the rest is a stack trace of the deadlocked goroutines>
I'll play around a bit with compilers and see if I can get the Centos container to compile a binary that works on the Jetson.
Another possibility is the CUDA version. We're trying to link against v11 for broader support, but maybe only v12 works on these devices?
I thought about that too, but from what I could tell: CUDA toolkits are "future compatible", meaning everything that works on v11 works on v12+. CUDA drivers are "backwards compatible". I compile it on my Jetson with v11.4, and that binary should work on systems with v12.
I did notice Ubuntu 22.04 was used to compile the runtime binary, maybe it's a GCC -> nvcc thing.
I don't know enough about gcc/C compiling to make heads or tails of this; do you see anything helpful here?
Compiled natively:
tegra@ok3d-1:~/ok3d/ollama-container/dev/bintest$ ldd ollama-jetson-native
linux-vdso.so.1 (0x0000ffff80177000)
libresolv.so.2 => /lib/aarch64-linux-gnu/libresolv.so.2 (0x0000ffff80104000)
libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff800d3000)
libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff800bf000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff7ff4c000)
/lib/ld-linux-aarch64.so.1 (0x0000ffff80147000)
Downloaded from the rc-4 releases page:
tegra@ok3d-1:~/ok3d/ollama-container/dev/bintest$ ldd ollama-linux-arm64
linux-vdso.so.1 (0x0000ffffac165000)
libresolv.so.2 => /lib/aarch64-linux-gnu/libresolv.so.2 (0x0000ffffac0f2000)
libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffffac0c1000)
librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffffac0a9000)
libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffffac095000)
libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffffabeb0000)
libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffffabe05000)
libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffffabc92000)
/lib/ld-linux-aarch64.so.1 (0x0000ffffac135000)
libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffffabc6e000)
I'm not sure if it will work, but you can try setting LD_LIBRARY_PATH to include the path to the cuda libs before starting ollama and see if it picks up the v12 library. (some minor code changes might be required to get this fully sorted out though)
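Something along these lines, before launching the server (the CUDA path below is the JetPack default that shows up in the discovery log; adjust to your install):
# assumes the JetPack CUDA userspace lives under /usr/local/cuda/lib64
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ollama serve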
The latest JetPack release supports CUDA Toolkit 12. I haven't had time to flash my devices, as I had a beta release this month, and I haven't checked if it's gone live yet. JetPack 5 doesn't support CUDA 12, and the underlying OS (i.e. Linux headers), the Nvidia driver, CUDA toolkit, etc. are all static and not able to be upgraded. JetPack 6 is supposed to change this, so maybe all I need to do is upgrade. I just checked, and JP6 is still in developer preview.
Regarding your idea on including the LD_LIBRARY_PATH, @dhiltgen: I tried that before. It finds the cuda12 lib, but it seems to prefer the packaged one (cuda11).
source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama578764547/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140 /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so.12.2.140 /usr/local/cuda-12/targets/aarch64-linux/lib/libcudart.so.12.2.140 /usr/local/cuda-12.2/targets/aarch64-linux/lib/libcudart.so.12.2.140]"
Building Ollama natively on JetPack 6 DP (CUDA 12) by following the generate/build workflow did work. It finds and packages the correct CUDA libs into the binary, and the resulting binary works as expected.
@dhiltgen Does the workflow build container for ARM64 have to be Centos/Rocky based or can you use the ubuntu 20.04 one?
When I built the binary using the Centos container, I had the same issue as with the downloaded binary. When I used nvidia/cuda:11.3.1-devel-ubuntu20.04, the resulting binary worked on my bare OS. To get it to run in a container properly, I had to use a dusty-nv container as my runtime; I used dustynv/build-essential:r35.4.1, as it's one of the smaller containers I could find at a minuscule 5 GB. I tried the nvidia/cuda:11.3.1-runtime-ubuntu20.04 and 11.4.3-runtime-ubuntu20.04 containers, and neither of them worked for runtime.
Example Dockerfile and dependency script here (note that I cloned the ollama repo into the folder I built the container in so that I didn't have to git clone inside the container build); a rough sketch is below.
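A minimal multi-stage sketch along those lines (image tags are the ones mentioned above; the Go version, paths, and copy layout are assumptions and may need adjusting):
# --- build stage: CUDA 11.3 devel image on Ubuntu 20.04 ---
FROM nvidia/cuda:11.3.1-devel-ubuntu20.04 AS builder
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y git cmake build-essential curl ca-certificates
# Go version is an assumption; Ollama needs a recent Go toolchain
RUN curl -fsSL https://go.dev/dl/go1.21.8.linux-arm64.tar.gz | tar -C /usr/local -xz
ENV PATH=/usr/local/go/bin:$PATH
# the ollama repo was cloned next to this Dockerfile beforehand (see note above)
COPY ollama/ /opt/ollama/
WORKDIR /opt/ollama
RUN go generate ./... && go build .

# --- runtime stage: dusty-nv base that carries the Jetson CUDA userspace ---
FROM dustynv/build-essential:r35.4.1
COPY --from=builder /opt/ollama/ollama /usr/local/bin/ollama
EXPOSE 11434
ENTRYPOINT ["/usr/local/bin/ollama"]
CMD ["serve"]
Run the resulting image with --runtime nvidia so the Jetson GPU is exposed to the container.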
Great to hear building from source does still work. So we just need to figure out how to get the official builds working.
Does the workflow build container for ARM64 have to be Centos/Rocky based or can you use the ubuntu 20.04 one?
The problem is glibc versions. Ubuntu generally tends to be more up to date, but that means Go binaries you compile on it won't work on older distros. We try to compile on an older base to maximize compatibility of the resulting pre-built binaries. Once we figure out what the right combination is, we may have to synthesize the ARM CUDA container base image and tools instead of relying on the official Nvidia ones hosted on Docker Hub.
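For what it's worth, a quick way to compare the two sides (the binary name here is just the release asset mentioned earlier in the thread):
# highest glibc symbol version the prebuilt binary requires
objdump -T ollama-linux-arm64 | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n1
# glibc version available on the Jetson
ldd --version | head -n1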
The Ubuntu 20.04 on JetPack 5 has gcc 10.5 (shown as compatible with gcc 9.6) and is running glibc 2.31. Not sure what versions the CUDA 11.3 Ubuntu container is running, but it's likely comparable or close. Would that work for this purpose?
Added note: Couldn't find a CUDA Centos 7 ARM64 container (AMD64 only). nvidia/cuda:11.3.1-devel-rockylinux8 runs GCC 8.5.0, glibc 2.28
@remy415 I just got a Jetson Orin, so I'm able to test now. What I'm seeing is a hang during model load. I tried compiling with a few different CUDA versions, but none worked (the v12 builds reported the Jetson's driver as being too old; my setup has v11.4). I was able to get it running with a little live surgery: set LD_LIBRARY_PATH to include the CUDA from the host, start ollama, wait for it to extract the runners, manually remove all the bundled CUDA libraries with rm /tmp/ollama*/runners/cuda*/libcu*, and then try to load a model. It winds up linking to the host CUDA library instead of our bundled version, and then it runs on the GPU. So the build and linking are producing a working executable; we're just bundling a CUDA library that for some reason won't work properly on Jetson systems. I'm not sure yet what the optimal fix is, but I'll explore alternative container base images to see if we can find one that balances our desire for an old glibc with one that actually works on Jetsons. If I can't find one, then maybe we'll need to make some code changes to be able to use the host CUDA libs in some(?) cases.
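For anyone who wants to try the same surgery, roughly (the CUDA path and model name are just illustrative):
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ollama serve &                          # wait for it to extract the runners under /tmp/ollama*/
sleep 10
rm /tmp/ollama*/runners/cuda*/libcu*    # drop the bundled CUDA libs so the host ones get linked instead
ollama run llama2                       # loading a model should now offload to the GPU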
I'm checking into glibc version compatibility, and also looking at the output of readelf for the various binaries I've collected.
Also, it's kinda weird, but your production binary is ~100 MB smaller than the one I compile on the Jetson (even considering I don't compile for CPU and I don't have ROCm builds).
I tried installing an updated toolkit (CUDA-TOOLKIT-11-4 in Rockylinux8) and got this error when trying to run:
CUDA error: CUBLAS_STATUS_NOT_SUPPORTED
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:10604
cublasGemmBatchedEx(
ctx.cublas_handle(),
CUBLAS_OP_T,
CUBLAS_OP_N,
ne01,
ne11,
ne10,
alpha,
(const void **) (ptrs_src.get() + 0*ne23),
CUDA_R_16F,
nb01/nb00,
(const void **) (ptrs_src.get() + 1*ne23),
CUDA_R_16F,
nb11/nb10,
beta,
( void **) (ptrs_dst.get() + 0*ne23),
cu_data_type,
ne01,
ne23,
cu_compute_type,
CUBLAS_GEMM_DEFAULT_TENSOR_OP
)
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:193: !"CUDA error"
Still digging through the CUDA error, not finding anything promising; guessing it's a toolkit version mismatch issue.
Adding a note to the previous comment: across one minor version upgrade, there is an additional ~230 MB of CUDA libraries, lol.
libcublas 11.5.1.109 -> 11.6.6.84: +42 MB
libcublasLt 11.5.1.109 -> 11.6.6.84: +128 MB
libcudart 11.3.109 -> 11.4.298: +61 MB
Total increase: 231 MB
- nvidia/cuda:11.3.1-devel-rockylinux8
-rwxr-xr-x 1 root root 126456824 May 4 2021 /usr/local/cuda/lib64/libcublas.so.11.5.1.109
-rwxr-xr-x 1 root root 245372792 May 4 2021 /usr/local/cuda/lib64/libcublasLt.so.11.5.1.109
-rwxr-xr-x 1 root root 638136 May 4 2021 /usr/local/cuda/lib64/libcudart.so.11.3.109
- Jetpack 5 (ubuntu 20.04, l4t 35.4.1)
-rw-r--r-- 1 root root 168574840 Sep 19 2022 /usr/local/cuda/lib64/libcublas.so.11.6.6.84
-rw-r--r-- 1 root root 373884448 Sep 19 2022 /usr/local/cuda/lib64/libcublasLt.so.11.6.6.84
-rw-r--r-- 1 root root 699488 Sep 14 2022 /usr/local/cuda/lib64/libcudart.so.11.4.298