stable-diffusion-webui-docker

error during container init: unable to apply apparmor profile

Open · alpha754293 opened this issue 1 year ago · 1 comment

Has this issue been opened before?

  • [X] It is not in the FAQ, I checked.
  • [X] It is not in the issues, I searched.

Describe the bug

I am running an Ubuntu 22.04 LTS LXC container under Proxmox 7.4-17.

The Nvidia GPU has been successfully passed through to said LXC container.

I was also able to successfully install the Nvidia Container Toolkit and ran the sample workload with one slight modification to the command from Nvidia's website:

$ sudo docker run --rm --runtime=nvidia --security-opt apparmor:unconfined --gpus all ubuntu nvidia-smi

I had to add the --security-opt apparmor:unconfined flag to get that command to run.

My <<CTID>>.conf looks like this:

lxc.apparmor.profile: unconfined
arch: amd64
cores: 8
features: mount=nfs,nesting=1
hostname: nvidia-ai
memory: 16384
net0: *snip*
net1: *snip*
ostype: ubuntu
rootfs: local-lvm:vm-4237-disk-0,size=128G
swap: 512
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 506:* rwm
lxc.cgroup.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvram dev/nvram none bind,optional,create=file
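As a sanity check (my addition, not something the toolkit requires), the bind mounts above can be verified from inside the LXC container; each device node listed in the lxc.mount.entry lines should exist:

```shell
#!/bin/sh
# Check that the NVIDIA device nodes bind-mounted via lxc.mount.entry
# are actually visible inside the container.
for dev in /dev/nvidia0 /dev/nvidiactl /dev/nvidia-modeset \
           /dev/nvidia-uvm /dev/nvidia-uvm-tools; do
    if [ -e "$dev" ]; then
        echo "present: $dev"
    else
        echo "MISSING: $dev"
    fi
done
```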
ubuntu@nvidia-ai:~/stable-diffusion-webui-docker$ sudo docker compose --profile download up --build
[sudo] password for ubuntu:
WARN[0000] /home/ubuntu/stable-diffusion-webui-docker/docker-compose.yml: `version` is obsolete
[+] Building 0.8s (6/8)                                                                                            docker:default
 => [download internal] load build definition from Dockerfile                                                                0.0s
 => => transferring dockerfile: 185B                                                                                         0.0s
 => [download internal] load metadata for docker.io/library/bash:alpine3.19                                                  0.4s
 => [download internal] load .dockerignore                                                                                   0.0s
 => => transferring context: 2B                                                                                              0.0s
 => CACHED [download 1/4] FROM docker.io/library/bash:alpine3.19@sha256:5353512b79d2963e92a2b97d9cb52df72d32f94661aa825fcfa  0.0s
 => [download internal] load build context                                                                                   0.0s
 => => transferring context: 128B                                                                                            0.0s
 => ERROR [download 2/4] RUN apk update && apk add parallel aria2                                                            0.4s
------
 > [download 2/4] RUN apk update && apk add parallel aria2:
0.248 runc run failed: unable to start container process: error during container init: unable to apply apparmor profile: apparmor failed to apply profile: write /proc/self/attr/apparmor/exec: no such file or directory
------
failed to solve: process "/bin/sh -c apk update && apk add parallel aria2" did not complete successfully: exit code: 1
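The failing write targets /proc/self/attr/apparmor/exec, which suggests the AppArmor kernel interface is not exposed inside the nested container even though the Docker daemon tries to apply a profile. A quick diagnostic sketch (my addition, untested here) to run inside the LXC container:

```shell
#!/bin/sh
# Report whether the AppArmor kernel interface that the error message
# refers to (/proc/self/attr/apparmor/...) is visible in this container.
if [ -f /sys/module/apparmor/parameters/enabled ] && \
   [ -e /proc/self/attr/apparmor ]; then
    echo "apparmor interface: available"
else
    echo "apparmor interface: unavailable"
fi
```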

Which UI

N/A

Hardware / Software

  • OS: Ubuntu 22.04 LTS LXC Container on Proxmox 7.4-17
  • OS version: see above
  • WSL version (if applicable): N/A
  • Docker Version:
Client: Docker Engine - Community
 Version:           26.0.1
 API version:       1.45
 Go version:        go1.21.9
 Git commit:        d260a54
 Built:             Thu Apr 11 10:53:21 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          26.0.1
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.9
  Git commit:       60b9add
  Built:            Thu Apr 11 10:53:21 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.31
  GitCommit:        e377cd56a71523140ca6ae87e30244719194a521
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  • Docker compose version:
Docker Compose version v2.26.1
  • Repo version: master (freshly cloned via git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git)
  • RAM: 16 GB
  • GPU/VRAM: 6 GB

Steps to Reproduce

  1. In Proxmox 7.4-17, create a privileged Ubuntu 22.04 LTS LXC container.
  2. Pass the Nvidia GPU (in my case, an RTX A2000 6 GB) from the host through to the LXC container by editing the <<CTID>>.conf file as shown above.
  3. Inside the container, install the Nvidia driver with the --no-kernel-modules flag.
  4. Install the Nvidia Container Toolkit via the link provided in the FAQ. (https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
  5. Install Docker CE and Docker Compose via the auto-install script (https://github.com/docker/docker-install)
  6. Run sudo docker compose --profile download up --build
  7. Get the error message above.
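Since --security-opt apparmor:unconfined made plain docker run work, one possible workaround (a sketch, untested here; the service name is from this repo's docker-compose.yml) is to add the equivalent security_opt entry to the affected services. Note that this only applies to running containers; the failure above happens during a RUN step in the BuildKit build, which security_opt does not cover:

```yaml
# docker-compose.yml (fragment) -- disable AppArmor confinement per service
services:
  download:
    security_opt:
      - apparmor:unconfined
```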

Additional context

Here is the output of nvidia-smi:

Sun Apr 14 23:43:00 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX A2000               Off |   00000000:81:00.0 Off |                  Off |
| 30%   29C    P8              5W /   70W |     102MiB /   6138MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

— alpha754293, Apr 14 '24

@alpha754293 This might just need the same solution as the one described at https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/703

— bean5, Jul 17 '24