cuda-nvcc missing again
It seems that the problem detected and solved in issue https://github.com/pangeo-data/pangeo-docker-images/issues/387 has resurfaced. I think this happened after #435 was merged.
The problem:
A ptxas-based error shows up, which can be easily reproduced with:
from jax import random
random.PRNGKey(0)
gives the error:
2023-02-01 19:08:39.849007: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/gpu/asm_compiler.cc:85] Couldn't get ptxas version string: INTERNAL: Couldn't invoke ptxas --version
2023-02-01 19:08:39.849939: F external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:454] ptxas returned an error during compilation of ptx to sass: 'INTERNAL: Failed to launch ptxas' If the error message indicates that a file could not be written, please verify that sufficient filesystem space is provided.
Aborted
During the last discussion, @ngam had asked to check which version of cuda-nvcc was installed. When I check this with
conda list | grep cuda-nvcc
it returns nothing, showing that there is no cuda-nvcc in the TensorFlow/JAX-based ml-notebook.
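As a quick diagnostic (my own sketch, not part of the original report), you can also check from Python whether the ptxas binary that cuda-nvcc provides is visible at all, which is what XLA complains about in the traceback above:

```python
import shutil
import subprocess

# ptxas ships with cuda-nvcc; if it is not on PATH, XLA cannot
# compile PTX to SASS and fails with the error shown above.
ptxas = shutil.which("ptxas")
if ptxas is None:
    print("ptxas not found on PATH -- cuda-nvcc is likely missing")
else:
    result = subprocess.run([ptxas, "--version"], capture_output=True, text=True)
    print(f"ptxas found at {ptxas}:\n{result.stdout}")
```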
Installing cuda-nvcc by using mamba install cuda-nvcc==11.6.* -c nvidia solves the problem.
However, it would be good if the user did not have to manually do this installation, and the docker image was properly setup.
@dhruvbalwada I thought it was removed intentionally because it was no longer needed? See the conversation here https://github.com/pangeo-data/pangeo-docker-images/pull/398 ...
Maybe @yuvipanda or @ngam or @weiji14 can chip in about why the problem has resurfaced?
It’s a complicated issue with all sorts of moving parts. I think for now the best thing is to keep it out and let the user find a resolution. This is generally a tricky problem, and version mismatches are bound to happen.
The good news is that cuda-nvcc is coming to conda-forge soon; the bad news is that it’ll be a while before the lengthy migration effort concludes.
Xref:
- https://github.com/conda-forge/staged-recipes/issues/21382
- https://github.com/conda-forge/staged-recipes/pull/21350#discussion_r1096481469
Btw, thanks @dhruvbalwada for keeping an eye on this, and for the detailed report :)
Small update: This is finally getting resolved... hopefully very soon! xref https://github.com/pangeo-data/pangeo-docker-images/issues/450
Looks like cuda-nvcc is now on conda-forge - https://github.com/conda-forge/cuda-nvcc-feedstock. Is it better to install it directly in the ml-notebook image, or wait for ML libraries like TensorFlow/JAX to depend on cuda-nvcc directly first? I see some mention of it e.g. at https://github.com/conda-forge/tensorflow-feedstock/issues/296#issuecomment-1428039041.
We should likely wait. I am still trying to assess how best to migrate JAX and TensorFlow to the new packaging format. We're in a bit of a bind here... with volunteer maintainers occupied with other tasks... but TensorFlow 2.12 is very close and I am making small progress on jaxlib.
Someone reported on the forum at https://discourse.pangeo.io/t/how-to-run-code-using-gpu-on-pangeo-saying-libdevice-not-found-at-libdevice-10-bc/3672 that a missing cuda-nvcc and the XLA_FLAGS setting were causing issues. Can we revisit adding cuda-nvcc to the docker image again, if the matter is resolved on conda-forge @ngam? @yuvipanda mentioned that 2i2c doesn't use the old K80 GPUs anymore, so we don't need to worry about backward compatibility if that helps.
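For reference, the usual workaround for the "libdevice not found" symptom is to point XLA at a directory containing nvvm/libdevice/libdevice.10.bc before JAX is first imported. A minimal sketch (the path below assumes the notebook conda env used in the Pangeo images; adjust it to wherever libdevice lives in your environment):

```python
import os

# Must be set before the first `import jax` -- XLA reads the flag when the
# GPU backend initializes. The env path is an assumption for Pangeo images.
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/srv/conda/envs/notebook"
```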
Quick note to say that jaxlib-0.4.23-cuda120py* actually has an explicit runtime dependency on cuda-nvcc now (see https://github.com/conda-forge/jaxlib-feedstock/pull/241), but we'll need some more updates on tensorflow to resolve an incompatibility with libabseil versions. See https://github.com/pangeo-data/pangeo-docker-images/pull/549#issuecomment-2123468406, and keep an eye on https://github.com/conda-forge/tensorflow-feedstock/pull/385.
Once those PRs are merged, users shouldn't have to install cuda-nvcc manually anymore, as it will be installed directly with jaxlib.
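Once that lands, a simple sanity check (a hypothetical helper of mine, not an official API) is to confirm that the solver actually pulled cuda-nvcc into the environment:

```python
import json
import shutil
import subprocess

def cuda_nvcc_packages():
    """List installed conda packages whose name mentions cuda-nvcc.

    Returns None when conda is not on PATH (e.g. outside the image).
    """
    conda = shutil.which("conda")
    if conda is None:
        return None
    out = subprocess.run([conda, "list", "--json"], capture_output=True, text=True)
    return sorted(p["name"] for p in json.loads(out.stdout) if "cuda-nvcc" in p["name"])

print(cuda_nvcc_packages())
```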
@dhruvbalwada Try my/b-data's CUDA-enabled JupyterLab Python docker stack
On the host
:exclamation: NVIDIA Driver v555.42.02 required
docker run --gpus all --rm -ti glcr.b-data.ch/jupyterlab/cuda/python/base bash
==========
== CUDA ==
==========
CUDA Version 12.5.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
=============
== JUPYTER ==
=============
Entered start.sh with args: bash
Running hooks in: /usr/local/bin/start-notebook.d as uid: 1000 gid: 100
Sourcing shell script: /usr/local/bin/start-notebook.d/10-populate.sh
Done running hooks in: /usr/local/bin/start-notebook.d
Running hooks in: /usr/local/bin/before-notebook.d as uid: 1000 gid: 100
Sourcing shell script: /usr/local/bin/before-notebook.d/10-env.sh
Sourcing shell script: /usr/local/bin/before-notebook.d/11-home.sh
Sourcing shell script: /usr/local/bin/before-notebook.d/30-code-server.sh
Sourcing shell script: /usr/local/bin/before-notebook.d/90-limits.sh
Done running hooks in: /usr/local/bin/before-notebook.d
Executing the command: bash
In the container
pip install "jax[cuda12_local]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
Defaulting to user installation because normal site-packages is not writeable
Looking in links: https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
Collecting jax[cuda12_local]
Downloading jax-0.4.28-py3-none-any.whl.metadata (23 kB)
Collecting ml-dtypes>=0.2.0 (from jax[cuda12_local])
Downloading ml_dtypes-0.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting numpy>=1.22 (from jax[cuda12_local])
Downloading numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.0/61.0 kB 3.9 MB/s eta 0:00:00
Collecting opt-einsum (from jax[cuda12_local])
Downloading opt_einsum-3.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting scipy>=1.9 (from jax[cuda12_local])
Downloading scipy-1.13.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.6/60.6 kB 7.9 MB/s eta 0:00:00
Collecting jaxlib==0.4.28+cuda12.cudnn89 (from jax[cuda12_local])
Downloading https://storage.googleapis.com/jax-releases/cuda12/jaxlib-0.4.28%2Bcuda12.cudnn89-cp312-cp312-manylinux2014_x86_64.whl (143.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.7/143.7 MB 7.7 MB/s eta 0:00:00
Downloading ml_dtypes-0.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 11.4 MB/s eta 0:00:00
Downloading numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.0/18.0 MB 31.4 MB/s eta 0:00:00
Downloading scipy-1.13.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.2/38.2 MB 21.7 MB/s eta 0:00:00
Downloading jax-0.4.28-py3-none-any.whl (1.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 25.4 MB/s eta 0:00:00
Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 4.2 MB/s eta 0:00:00
Installing collected packages: numpy, scipy, opt-einsum, ml-dtypes, jaxlib, jax
Successfully installed jax-0.4.28 jaxlib-0.4.28+cuda12.cudnn89 ml-dtypes-0.4.0 numpy-1.26.4 opt-einsum-3.3.0 scipy-1.13.1
python
Python 3.12.3 (main, Apr 9 2024, 18:09:17) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax
>>> jax.random.PRNGKey(0)
Array([0, 0], dtype=uint32)
>>> jax.devices()
[cuda(id=0)]
>>>
What makes my/b-data's images different:
- Multi-arch: linux/amd64, linux/arm64/v8
- Derived from nvidia/cuda:12.5.0-devel-ubuntu22.04
  - including development libraries and headers
- TensorRT and TensorRT plugin libraries
  - including development libraries and headers
- IDE: code-server next to JupyterLab
- Just Python – no Conda / Mamba
ℹ️ For further explanations, see https://github.com/iot-salzburg/gpu-jupyter/issues/123#issuecomment-1845908014 ff.
@dhruvbalwada Or you could use
docker run --gpus all --rm -ti glcr.b-data.ch/jupyterlab/python/base bash
which does not have a CUDA Toolkit pre-installed. And then
pip install "jax[cuda12]" jaxlib
which brings its own CUDA libraries.
Final note: Using pip, the above also works with the official python:3.12 image.
BTW This issue is resolved with the latest ml-notebook image, i.e. pangeo/ml-notebook:2024.06.02.
Thanks @benz0li for noticing! Yes, it looks like we are using the CUDA build of jaxlib now:
https://github.com/pangeo-data/pangeo-docker-images/blob/8be5af2b06a7f7228c64ea292ef99661db4448b5/ml-notebook/conda-lock.yml#L4476
which pulled in cuda-nvcc=12.5.0 as a dependency:
https://github.com/pangeo-data/pangeo-docker-images/blob/8be5af2b06a7f7228c64ea292ef99661db4448b5/ml-notebook/conda-lock.yml#L1820-L1833
I'll refactor #549 to update the cuda-nvcc note on the main README.md, and then we can close this issue :tada:
@benz0li, this is very impressive work, and I'd love to continue the discussion somewhere, maybe on #345 where I've been thinking about building on top of miniforge-cuda as the base image (which is built on top of the nvidia/cuda docker images). I'll definitely keep an eye on https://gitlab.com/b-data/jupyterlab/python/docker-stack though!