
Starting or stopping the admin container breaks `nvidia-smi` in one of my running containers

modelbitjason opened this issue on Mar 28, 2024 · 7 comments

I'm using bottlerocket-nvidia on ECS.

Very recently, logging into my EC2 hosts via SSM and then entering the admin container causes nvidia-smi to stop functioning.

Other containers that also access the GPU on that host are fine.

I'm not telling ECS about my GPU usage; instead, I changed the default runtime to nvidia and pass in the relevant env vars. If I tell ECS about the GPU, it won't let me gracefully roll a new instance on that host.
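
For illustration, here is a minimal sketch of that kind of setup on a stock Docker daemon; the daemon.json path, runtime name, and image name are assumptions, and the way the default runtime is actually configured on Bottlerocket may differ:

# /etc/docker/daemon.json (assumed path) -- make nvidia the default runtime
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": { "path": "nvidia-container-runtime" }
  }
}

# Container started without declaring the GPU to ECS; the runtime plus these
# env vars alone grant GPU access (image name is hypothetical):
docker run -d -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all my-gpu-image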

Any ideas on what could have changed or how I can debug? This also happens on 1.19.0 -- I had only tried 1.19.2 before filing this report, but I can reproduce it on 1.19.0 as well.

Running docker inspect before and after produces the exact same output (except for failing health checks once nvidia-smi stops working).
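
For example, one way to compare (container ID 694 is from the session below; file paths are just illustrative):

docker inspect 694 > /tmp/inspect-before.json   # while nvidia-smi still works
# ...enter and exit the admin container...
docker inspect 694 > /tmp/inspect-after.json
diff /tmp/inspect-before.json /tmp/inspect-after.json   # only the health-check state differs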

Image I'm using:

amazon/bottlerocket-aws-ecs-2-nvidia-x86_64-v1.19.2-29cc92cc

What I expected to happen: I expect nvidia-smi to return some information about the instance's graphics cards.

bash-5.1# docker exec 694 nvidia-smi
Thu Mar 28 18:08:54 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:1E.0 Off |                    0 |
| N/A   29C    P0              26W /  70W |    109MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

What actually happened:

bash-5.1# docker exec 694 nvidia-smi
Failed to initialize NVML: Unknown Error

How to reproduce the problem:

Log into an instance, then run enter-admin-container and sheltie:

 docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"  

Then exit, run disable-admin-container, then enter-admin-container again.

Run docker logs <container> -- you should see:

GPU 0: Tesla T4 (UUID: GPU-0dff5832-17bd-b243-7f0f-95988d33ba5a)
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error

modelbitjason commented on Mar 28, 2024

I can reproduce this. I launched a g3.8x instance with bottlerocket-aws-ecs-2-nvidia-x86_64-v1.19.2-29cc92cc and followed the repro steps; docker logs <container> gives me:

GPU 0: Tesla M60 (UUID: GPU-963a4569-ebb4-d876-14a7-c3c8491f8682)
GPU 1: Tesla M60 (UUID: GPU-cf45c479-5e3d-5316-42b5-1301bc4f4f6a)
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error

Because the instance I was using had 2 GPUs, I also tried exposing each of the GPUs to two different containers:

docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"
docker run -d --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -e NVIDIA_DRIVER_CAPABILITIES=all \
 nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"

The result was the same for both containers, but a new error message showed up in the logs:

GPU 0: Tesla M60 (UUID: GPU-963a4569-ebb4-d876-14a7-c3c8491f8682)
Unable to determine the device handle for gpu 0000:00:1D.0: Unknown Error
Failed to initialize NVML: Unknown Error
Failed to initialize NVML: Unknown Error

sam-berning commented on Mar 29, 2024

Hi @modelbitjason, thanks for reporting this.

I just want to add some context on why the steps you followed trigger this failure.

When you run enable-admin-container/disable-admin-container, a systemctl command is issued to reload containerd's configuration so that the admin container is started or stopped. Whenever this happens, systemd undoes the cgroup modifications that libnvidia-container made when it created the containers and granted them access to the GPUs. This is a known issue, and the suggested workaround was to run nvidia-ctk whenever the GPUs are loaded, which we already do today in all Bottlerocket variants. There is another fix in newer versions of libnvidia-container; however, when we tried to update to v1.14.X, it broke the ECS aarch64 variant. I'll ask my coworkers to give that new fix a spin and check whether the problem persists (after we figure out why the new libnvidia-container version broke the ECS aarch64 variant).
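
A minimal way to see the trigger in isolation, assuming a GPU container like the one from the repro steps is already running (the container name is a placeholder):

# On the host, after sheltie:
docker exec <gpu-container> nvidia-smi   # works
systemctl daemon-reload                  # the same reload that enable/disable-admin-container causes
docker exec <gpu-container> nvidia-smi   # now fails with "Failed to initialize NVML: Unknown Error"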

That said, could you please expand on what you mean by this?

If I tell ECS about the GPU, it won't let me gracefully roll a new instance on that host.

Are you having problems with task deployments? Or are you trying to "over-subscribe" the node so that you can run multiple tasks on the same host and share a single GPU?

arnaldo2792 commented on Apr 3, 2024

Thanks for the explanation @arnaldo2792 -- I saw some other tickets about cgroups but those fixes didn't help. This explains why.

Re: ECS + GPUs, yes, but more specifically, I need the old instance to hand off data to the new one and do a graceful shutdown. I only have one ECS-managed task per EC2 instance (except during rollout).

When I roll out a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain and eventually shut down. This way I can be sure the new version comes up healthy before the old version on that host is removed.

This service manages some long-lived containers on the host and proxies requests to them. The old version passes control of the containers to the newly started version, but needs to stick around until all the outstanding requests it is currently proxying are complete.

modelbitjason commented on Apr 3, 2024

@modelbitjason, I'm sorry for the very late response.

When I roll a new task version, the newly started version finds the existing container, asks for some state, then tells it to drain + eventually shutdown.

Just so that I have the full picture of your architecture, when/why do you use the admin container? Do you use it to check whether the old container is done draining?

arnaldo2792 commented on May 30, 2024

Thanks @arnaldo2792 -- These days I only use the admin container to debug. Previously, it was also used to manually run docker prune to free up space.

We've had other cases where the GPU goes away, so I log in to try and see what happened. One or two times it seemed permanent and I just cycled to a new instance.

The current workaround is to have the admin container start at boot via settings. I'm just worried about other things triggering this behavior somehow.
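
For reference, the setting in question is Bottlerocket's host-containers admin setting, e.g. in instance user data:

[settings.host-containers.admin]
enabled = true

The same setting can also be flipped at runtime with apiclient set host-containers.admin.enabled=true.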

modelbitjason commented on Jun 3, 2024

Previously it was also used to manually run docker prune to free up space.

There are options in ECS to free up space 👍 I can give you pointers to the ones we support if you need them.

The current workaround is to have the admin container start at boot via settings.

Do you enable the admin container on boot only on the instances you plan to debug, or on all your instances, regardless of whether you will debug them?

I'm just worried about other things triggering this behavior somehow.

The only path I've seen that triggers this behavior is the systemctl daemon-reload command, and the NVIDIA folks mentioned that moving towards CDI could help with this problem. I'll check with the ECS folks whether they plan to support CDI soon.
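
As a rough sketch of what CDI-based injection looks like with the NVIDIA Container Toolkit outside of ECS (the output path and the podman invocation are only illustrative; neither is something Bottlerocket/ECS expose today):

# Generate CDI specs describing the GPUs on the host
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Request the device by its CDI name instead of relying on the runtime hook,
# e.g. with a CDI-aware engine such as podman:
podman run --rm --device nvidia.com/gpu=all nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 nvidia-smi -L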

arnaldo2792 commented on Jun 3, 2024

Previously it was also used to manually run docker prune to free up space.

There are options in ECS to free up space, 👍 I can give you pointers to the ones we support if you need them.

We already have the Docker socket mapped in, so we do it from our control plane based on disk usage. We only use ECS to run the main control plane; that container then starts other containers directly via the Docker socket.
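
A minimal sketch of that pattern (the image name is hypothetical):

# The control-plane task mounts the host's Docker socket so it can start sibling
# containers and run cleanup (e.g. docker system prune) on its own:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-control-plane-image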

ECS is helpful for the networking and stuff, but we don't have enough control over placement and it's sometimes too slow to start tasks. So this partial usage has been pretty good, especially for tasks that don't need to have their own ENIs.

The current workaround is to have the admin container start at boot via settings.

Do you enable the admin container on boot only on the instances you plan to debug, or on all your instances, regardless of whether you will debug them?

Well, we do it for all of them on boot, since we don't know when we'll need to debug. Previously, we'd only enable the admin container when needed. Our system is still pretty new and we run into weird bugs like the GPU going away, so we'll need the ability to debug for the foreseeable future.

I'm just worried about other things triggering this behavior somehow.

The only path I've seen that triggers this behavior is the systemctl daemon-reload command, and the NVIDIA folks mentioned that moving towards CDI could help with this problem. I'll check with the ECS folks whether they plan to support CDI soon.

That's reassuring to hear! We haven't noticed any problems since having the container start at boot. We use it rarely; it's a 'break glass' measure.

modelbitjason commented on Jun 3, 2024

This was fixed starting with Bottlerocket 1.40.x:

# On the host:
systemctl daemon-reload

I still have access to the GPU:

bash-5.2# nvidia-smi
Wed Jul 23 22:35:08 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.148.08             Driver Version: 570.148.08     CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       On  |   00000000:00:1E.0 Off |                    0 |
| N/A   32C    P8             13W /   70W |       0MiB /  15360MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
bash-5.2# ./samples/
UnifiedMemoryPerf        deviceQuery              globalToShmemAsyncCopy   immaTensorCoreGemm       reductionMultiBlockCG    shfl_scan                simpleAWBarrier          simpleAtomicIntrinsics   simpleVoteIntrinsics     vectorAdd                warpAggregatedAtomicsCG
bash-5.2# find /samples/ -type f -exec {}
find: missing argument to `-exec'
bash-5.2# find /samples/ -type f -exec {} \;
GPU Device 0: "Turing" with compute capability 7.5

Running ........................................................

Overall Time For matrixMultiplyPerf

Printing Average of 20 measurements in (ms)
Size_KB  UMhint UMhntAs  UMeasy   0Copy MemCopy CpAsync CpHpglk CpPglAs
4         0.175   0.205   0.339   0.019   0.033   0.028   0.037   0.028
16        0.193   0.246   0.449   0.041   0.062   0.058   0.065   0.063
64        0.321   0.377   0.842   0.125   0.166   0.158   0.131   0.123
256       1.064   0.837   1.309   0.685   0.604   0.559   0.455   0.448
1024      3.259   3.431   4.043   4.830   2.442   2.255   1.820   1.816
4096     13.106  13.458  14.598  35.998  10.177  10.395   9.162   9.171
16384    59.804  62.008  68.144 278.751  50.942  50.737  47.445  47.479

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
/samples/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla T4"
  CUDA Driver Version / Runtime Version          12.9 / 12.9
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 14914 MBytes (15638134784 bytes)
  (040) Multiprocessors, (064) CUDA Cores/MP:    2560 CUDA Cores
  GPU Max Clock rate:                            1590 MHz (1.59 GHz)
  Memory Clock rate:                             5001 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        65536 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 30
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.9, CUDA Runtime Version = 12.9, NumDevs = 1
Result = PASS
[globalToShmemAsyncCopy] - Starting...
GPU Device 0: "Turing" with compute capability 7.5

MatrixA(1280,1280), MatrixB(1280,1280)
Running kernel = 0 - AsyncCopyMultiStageLargeChunk
Computing result using CUDA Kernel...
done
Performance= 904.60 GFlop/s, Time= 4.637 msec, Size= 4194304000 Ops, WorkgroupSize= 256 threads/block
Checking computed result for correctness: Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Initializing...
GPU Device 0: "Turing" with compute capability 7.5

M: 4096 (16 x 256)
N: 4096 (16 x 256)
K: 4096 (16 x 256)
Preparing data for GPU...
Required shared memory size: 64 Kb
Computing... using high performance kernel compute_gemm_imma
Time: 4.796416 ms
TOPS: 28.65
reductionMultiBlockCG Starting...

GPU Device 0: "Turing" with compute capability 7.5

33554432 elements
numThreads: 1024
numBlocks: 40

Launching SinglePass Multi Block Cooperative Groups kernel
Average time: 0.961299 ms
Bandwidth:    139.621156 GB/s

GPU result = 1.992401361465
CPU result = 1.992401361465
Starting shfl_scan
GPU Device 0: "Turing" with compute capability 7.5

> Detected Compute SM 7.5 hardware with 40 multi-processors
Starting shfl_scan
GPU Device 0: "Turing" with compute capability 7.5

> Detected Compute SM 7.5 hardware with 40 multi-processors
Computing Simple Sum test
---------------------------------------------------
Initialize test data [1, 1, 1...]
Scan summation for 65536 elements, 256 partial sums
Partial summing 256 elements with 1 blocks of size 256
Test Sum: 65536
Time (ms): 0.289088
65536 elements scanned in 0.289088 ms -> 226.699142 MegaElements/s
CPU verify result diff (GPUvsCPU) = 0
CPU sum (naive) took 0.176060 ms

Computing Integral Image Test on size 1920 x 1080 synthetic data
---------------------------------------------------
Method: Fast  Time (GPU Timer): 0.053568 ms Diff = 0
Method: Vertical Scan  Time (GPU Timer): 0.154720 ms
CheckSum: 2073600, (expect 1920x1080=2073600)
/samples/simpleAWBarrier starting...
GPU Device 0: "Turing" with compute capability 7.5

Launching normVecByDotProductAWBarrier kernel with numBlocks = 40 blockSize = 1024
Result = PASSED
/samples/simpleAWBarrier completed, returned OK
simpleAtomicIntrinsics starting...
GPU Device 0: "Turing" with compute capability 7.5

Processing time: 1.375000 (ms)
simpleAtomicIntrinsics completed, returned OK
[simpleVoteIntrinsics]
GPU Device 0: "Turing" with compute capability 7.5

> GPU device has 40 Multi-Processors, SM 7.5 compute capabilities

[VOTE Kernel Test 1/3]
        Running <<Vote.Any>> kernel1 ...
        OK

[VOTE Kernel Test 2/3]
        Running <<Vote.All>> kernel2 ...
        OK

[VOTE Kernel Test 3/3]
        Running <<Vote.Any>> kernel3 ...
        OK
        Shutting down...
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
GPU Device 0: "Turing" with compute capability 7.5

CPU max matches GPU max

Warp Aggregated Atomics PASSED
bash-5.2#

arnaldo2792 commented on Jul 23, 2025