
Failed to initialize NVML: Unknown Error

Open hoangtnm opened this issue 2 years ago • 28 comments

The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.

1. Quick Debug Checklist

  • [ ] Are you running on an Ubuntu 18.04 node?
  • [x] Are you running Kubernetes v1.13+?
  • [x] Are you running Docker (>= 18.06) or CRIO (>= 1.13+)?
  • [ ] Do you have i2c_core and ipmi_msghandler loaded on the nodes?
  • [ ] Did you apply the CRD (kubectl describe clusterpolicies --all-namespaces)

1. Issue or feature description

Hi, I'm deploying Kubeflow v1.6.1 along with nvidia/gpu-operator for training DL models. It works great, but after a random amount of time (maybe 1-2 days, I guess), I can no longer use nvidia-smi to check GPU status. When this happens, it raises:

(base) jovyan@agm-0:~/vol-1$ nvidia-smi
Failed to initialize NVML: Unknown Error

I'm not sure why this happens, because training runs without any problem for several epochs, and then when I come back the next day this error appears. Do you have any idea?

2. Steps to reproduce the issue

This is how I deploy nvidia/gpu-operator:

sudo snap install helm --classic
helm repo add nvidia https://nvidia.github.io/gpu-operator \
  && helm repo update \
  && helm install \
  --version=v22.9.0 \
  --generate-name \
  --create-namespace \
  --namespace=gpu-operator-resources \
  nvidia/gpu-operator \
  --set driver.enabled=false \
  --set devicePlugin.env[0].name=DEVICE_LIST_STRATEGY \
  --set devicePlugin.env[0].value="volume-mounts" \
  --set toolkit.env[0].name=ACCEPT_NVIDIA_VISIBLE_DEVICES_ENVVAR_WHEN_UNPRIVILEGED \
  --set-string toolkit.env[0].value=false \
  --set toolkit.env[1].name=ACCEPT_NVIDIA_VISIBLE_DEVICES_AS_VOLUME_MOUNTS \
  --set-string toolkit.env[1].value=true
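
To sanity-check the deployment afterwards, something like this can be used (the namespace matches the --namespace flag above; the clusterpolicy check is the one from the checklist):

kubectl get pods -n gpu-operator-resources
kubectl describe clusterpolicies --all-namespaces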

hoangtnm avatar Nov 01 '22 02:11 hoangtnm

@hoangtnm Can you confirm the OS version you are using, along with the runtime (containerd, docker) version? Also, is cgroup v2 enabled on the nodes? (i.e. is the systemd.unified_cgroup_hierarchy=1 kernel command line passed, and does /sys/fs/cgroup/cgroup.controllers exist?)
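
For reference, a minimal way to check this on a node (a sketch; exact output will vary):

# cgroup v2 is in effect if the unified hierarchy is mounted:
stat -fc %T /sys/fs/cgroup/        # prints "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1
ls /sys/fs/cgroup/cgroup.controllers
# the kernel command line shows whether the hierarchy was forced on or off:
cat /proc/cmdline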

shivamerla avatar Nov 01 '22 23:11 shivamerla

@shivamerla I'm using Ubuntu 22.04.1 LTS and docker. This is my docker daemon's config along with its version:

docker-ce-cli/jammy,now 5:20.10.20~3-0~ubuntu-jammy amd64 [installed,upgradable to: 5:20.10.21~3-0~ubuntu-jammy]
docker-ce-rootless-extras/jammy,now 5:20.10.20~3-0~ubuntu-jammy amd64 [installed,upgradable to: 5:20.10.21~3-0~ubuntu-jammy]
docker-ce/jammy,now 5:20.10.20~3-0~ubuntu-jammy amd64 [installed,upgradable to: 5:20.10.21~3-0~ubuntu-jammy]
docker-compose-plugin/jammy,now 2.12.0~ubuntu-jammy amd64 [installed,upgradable to: 2.12.2~ubuntu-jammy]
docker-scan-plugin/jammy,now 0.17.0~ubuntu-jammy amd64 [installed,upgradable to: 0.21.0~ubuntu-jammy]

{
    "default-runtime": "nvidia",
    "exec-opts": [
        "native.cgroupdriver=systemd"
    ],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "/usr/local/nvidia/toolkit/nvidia-container-runtime"
        },
        "nvidia-experimental": {
            "args": [],
            "path": "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
        }
    },
    "storage-driver": "overlay2"
}

Btw, I don't think cgroup v2 is configured on my system. I only installed a fresh Ubuntu, Docker with the config mentioned above, and then deployed gpu-operator.

hoangtnm avatar Nov 02 '22 02:11 hoangtnm

The default in Ubuntu 22.04 is cgroupv2. Just to confirm though, can you show us the contents of this folder:

/sys/fs/cgroup/

klueska avatar Nov 02 '22 13:11 klueska

I have cgroupv2 on Ubuntu 22.04 and have the same problem. Does that mean cgroupv2 is not supported here?

xhejtman avatar Dec 04 '22 01:12 xhejtman

I tried setting systemd.unified_cgroup_hierarchy=0, but the result is the same. I guess it might be related to SystemdCgroup = true in the containerd config.toml?

xhejtman avatar Dec 04 '22 01:12 xhejtman

The containerd version is v1.6.6-k3s1 (rke2), Kubernetes 1.24.8.

xhejtman avatar Dec 04 '22 10:12 xhejtman

I checked: I got this error when upgrading from 1.24.2 to 1.24.8. Versions later than 1.24.2 require SystemdCgroup = true, which seems to be incompatible with the nvidia toolkit. I tried both the 1.11.0 and 22.9.0 operator versions.

Whole runtime configuration:

      [plugins.cri.containerd.runtimes]

        [plugins.cri.containerd.runtimes.nvidia]
          runtime_type = "io.containerd.runc.v2"

          [plugins.cri.containerd.runtimes.nvidia.options]
            BinaryName = "/usr/local/nvidia/toolkit/nvidia-container-runtime"
            Runtime = "/usr/local/nvidia/toolkit/nvidia-container-runtime"
            SystemdCgroup = true

        [plugins.cri.containerd.runtimes.nvidia-experimental]
          runtime_type = "io.containerd.runc.v2"

          [plugins.cri.containerd.runtimes.nvidia-experimental.options]
            BinaryName = "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
            Runtime = "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
            SystemdCgroup = true

        [plugins.cri.containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"

          [plugins.cri.containerd.runtimes.runc.options]
            SystemdCgroup = true

Any chance for a quick fix?

The problem is that all /dev/nvidia* devices in containers become inaccessible:

cat /dev/nvidiactl 
cat: /dev/nvidiactl: Operation not permitted

xhejtman avatar Dec 04 '22 12:12 xhejtman

All this is happening because systemd removes the devices from the cgroup: device access needs to be configured via systemd, not written directly into the cgroup file.
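
As a rough illustration of that (the scope name below is hypothetical; the real one can be found with systemd-cgls):

# Inspect what systemd believes the container scope may access
systemctl show --property=DeviceAllow cri-containerd-<container-id>.scope
# A temporary manual repair would have to go through systemd, not the cgroup files:
systemctl set-property --runtime cri-containerd-<container-id>.scope DeviceAllow="/dev/nvidiactl rw"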

xhejtman avatar Dec 05 '22 10:12 xhejtman

I found out that this is connected with the static cpu-manager-policy. If both SystemdCgroup and the static cpu-manager-policy are used, then the access rights to the devices are removed and the GPU becomes unusable. May be related to https://github.com/NVIDIA/gpu-operator/issues/455

xhejtman avatar Dec 05 '22 12:12 xhejtman

The issue with CPUManager compatibility is a well-known one that had an (until recently) stable workaround: using the --compatWithCPUManager option to the device plugin helm chart (or, more specifically, passing the --pass-device-specs flag directly to the plugin binary). Please see https://github.com/NVIDIA/nvidia-docker/issues/966 for a discussion of why this is an issue and how the workaround works.
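
For reference, with the standalone device plugin helm chart that looks roughly like this (a sketch; the release and repo names are illustrative):

helm upgrade --install nvdp nvdp/nvidia-device-plugin \
  --namespace nvidia-device-plugin --create-namespace \
  --set compatWithCPUManager=true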

Unfortunately, it seems that recent combinations of systemd / containerd / runc do not allow this workaround to work anymore. As mentioned in the link above, the underlying issue is due to a flaw in the design of the existing nvidia-container-stack, and not something that is easily worked around.

We have been working on a redesign of the nvidia-container-stack (based on CDI) for a few years now that architects this problem away, but it is not yet enabled by default. For many use cases it is already a stable / better solution than what is provided today, but it does not yet have full feature parity with the existing stack, which is why we can't just make the switch.

That said, for most (possibly all) GPU operator use cases it should already have feature parity, and we plan on switching to this new approach as the default in the next couple of releases (likely the March release).

In the meantime, I will see if we can slip in an option for the next operator release (coming out in 2 weeks) to at least provide the ability to enable CDI as the default mechanism for device injection so that those of you facing this problem at least have a way out of it.

klueska avatar Dec 05 '22 12:12 klueska

Thank you for the explanation! I can also disable the static manager for a while, but I will also test pass-device-specs to see whether it works.

xhejtman avatar Dec 05 '22 13:12 xhejtman

@xhejtman Just out of curiosity, can you try to apply the following to see if it resolves your issue: https://github.com/NVIDIA/nvidia-container-toolkit/issues/251

We would like to understand if creating these symlinks on a system that exhibits the problem is enough to work around the issue.
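
For anyone trying this, a minimal sketch of what the linked workaround does (assuming the /dev/nvidia* nodes already exist on the host; the linked issue has the authoritative steps):

# Create /dev/char/<major>:<minor> symlinks for every NVIDIA character device
for dev in /dev/nvidia*; do
  [ -c "$dev" ] || continue
  major=$(( 0x$(stat -c '%t' "$dev") ))   # %t/%T print the device major/minor in hex
  minor=$(( 0x$(stat -c '%T' "$dev") ))
  ln -sf "$dev" "/dev/char/${major}:${minor}"
done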

klueska avatar Dec 05 '22 16:12 klueska

It seems it works as well.

xhejtman avatar Dec 05 '22 18:12 xhejtman

Btw, does it work correctly if you have multiple cards in the system and request only one?

xhejtman avatar Dec 05 '22 19:12 xhejtman

However, with the latest nvidia driver (520), there are no /dev/nvidia* nodes on the host, so the workaround with ln -s is not applicable. Version 520 is required for the H100 card.

xhejtman avatar Dec 05 '22 23:12 xhejtman

What do you mean there are no /dev/nvidia* nodes on the host? Nothing has changed in that regard with respect to the driver. That said, it has never been the case that these nodes get created by the driver itself (due to GPL limitations). They typically get created in one of three ways:

  1. Running nvidia-smi on the host once the driver installation has completed (which will create all device nodes)
  2. Manually running nvidia-modprobe telling it which specific device nodes to create
  3. Relying on the nvidia container stack (and libnvidia-container specifically) to create them for you before injecting them into a container

Based on this bug, 3 won't work anymore, but 1 and 2 still should.
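
For example (a sketch; double-check the exact flags against nvidia-modprobe --help for your driver version):

# 1. Simplest: run nvidia-smi once on the host; it creates all the device nodes
nvidia-smi
# 2. Or create specific nodes with nvidia-modprobe, e.g. card 0 plus the nvidia-uvm nodes
nvidia-modprobe -c 0 -u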

klueska avatar Dec 06 '22 00:12 klueska

root@kub-b10:~# ls /dev/nvid*
ls: cannot access '/dev/nvid*': No such file or directory
root@kub-b10:~#
root@kub-b10:~# chroot /run/nvidia/driver
root@kub-b10:/# ls /dev/nvid*
/dev/nvidia-modeset  /dev/nvidia-uvm  /dev/nvidia-uvm-tools  /dev/nvidia0  /dev/nvidiactl

/dev/nvidia-caps:
nvidia-cap1  nvidia-cap2
root@kub-b10:/# 

So what I mean is that the /dev/nvidia* node files exist only in the /run chroot and in the container; I do not see them in the host /dev.

But that's my bad; they are actually missing with the older driver as well. I thought they were created by loading the nvidia module.

xhejtman avatar Dec 06 '22 00:12 xhejtman

@xhejtman This is the expected behavior with the driver container root under /run/nvidia/driver. If the driver is installed directly on the node, then we would see the /dev/nvidia* device nodes.

shivamerla avatar Dec 06 '22 00:12 shivamerla

May be related to NVIDIA/nvidia-docker#455

I am definitely using SystemdCgroup and static cpu-manager-policy FWIW

benlsheets avatar Dec 12 '22 16:12 benlsheets

Could you see if manually creating the /dev/char devices as described here helps to resolve your issue: https://github.com/NVIDIA/nvidia-container-toolkit/issues/251

Regardless of whether you are running with the driver container or not, these char devices will need to be created in the root /dev/char folder.

klueska avatar Dec 12 '22 16:12 klueska

Should this be fixed in version 22.9.1?

xhejtman avatar Dec 18 '22 11:12 xhejtman

No, unfortunately, not.

klueska avatar Jan 03 '23 10:01 klueska

@klueska I followed https://github.com/NVIDIA/nvidia-docker/issues/1671, and the conclusion seems to be that gpu-operator won't be compatible with runc versions newer than 1.1.3 (containerd versions newer than 1.6.7).
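
For anyone checking which side of that boundary they are on, something like this (the bundled rke2/k3s binaries may live under /var/lib/rancher/ rather than on the default PATH):

containerd --version
runc --version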

This Failed to initialize NVML: Unknown Error will happen even if cpuManager is not set (at least, this is our case).

I think this issue should definitely be added to the known issues in the release notes; otherwise people who upgrade their containerd version in production will face detrimental consequences...

we10710aa avatar Jan 10 '23 13:01 we10710aa

I was able to reproduce this and verify that manually creating symlinks to the various nvidia devices in /dev/char resolves the issue. I need to talk to our driver team to determine why these are not automatically created and how to get them created going forward.

At least we seem to fully understand the problem now, and know what is necessary to resolve it. In the meantime, I would recommend creating these symlinks manually to work around this issue.
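
A quick way to confirm the symlinks are in place on an affected node (a sketch):

ls -l /dev/nvidia*                  # note each node's major, minor numbers
ls -l /dev/char/ | grep -i nvidia   # each major:minor pair should appear here as a symlink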

klueska avatar Jan 13 '23 11:01 klueska

We just released GPU Operator 22.9.2 which contains a workaround for this issue. After the driver is installed, we create the symlinks under '/dev/char' pointing to all NVIDIA character devices.

@hoangtnm @xhejtman would you be able to verify 22.9.2 resolves this issue?
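
Upgrading the existing release in place should be enough to pick up the workaround; roughly (with the generated release name substituted):

helm upgrade <release-name> nvidia/gpu-operator \
  --namespace gpu-operator-resources \
  --version=v22.9.2 \
  --reuse-values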

cdesiniotis avatar Feb 02 '23 04:02 cdesiniotis

Hello, it seems that the char device symlinks do not solve this issue for MIG devices; nvidia-smi complains about:

517971 openat(AT_FDCWD, "/proc/driver/nvidia/capabilities/gpu0/mig/gi13/access", O_RDONLY) = -1 ENOENT (No such file or directory)

Should this be fixed in newer versions?

xhejtman avatar May 02 '23 07:05 xhejtman

Upgrading runc resolves this: https://github.com/opencontainers/runc/commit/bf7492ee5d022cd99a9dbe71c5c4f965041552e9

wangzhipeng avatar Jun 05 '23 09:06 wangzhipeng

Set the device-plugin param PASS_DEVICE_SPECS to true.
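
With the gpu-operator chart this can be passed through the devicePlugin env, in the same style as the install command earlier in the thread (a sketch; the env index is illustrative and must not collide with entries already set):

helm upgrade <release-name> nvidia/gpu-operator -n gpu-operator-resources --reuse-values \
  --set devicePlugin.env[0].name=PASS_DEVICE_SPECS \
  --set-string devicePlugin.env[0].value=true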

zlianzhuang avatar Apr 28 '24 07:04 zlianzhuang