Some images result in the error: copying system image from manifest list: writing blob: adding layer with blob: processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

Open Zivodor opened this issue 1 year ago • 11 comments

Issue Description

When attempting to create containers for some images the command fails with the error:

Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:9f16480e2ff54481cb1ea1553429bf399e8269985ab0dec5b5af6f55ea747d3f": processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

Steps to reproduce the issue

  1. Create a podman-compose file (provided below)
  2. Perform podman-compose up

Describe the results you received

You can see the logs here

Describe the results you expected

Dashy should be pulled down and started successfully.

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 99.76
    systemPercent: 0.06
    userPercent: 0.18
  cpus: 8
  databaseBackend: sqlite
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 2015
  hostname: project-hydra
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.1.0-21-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 15922044928
  memTotal: 16628264960
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-3_amd64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-3_amd64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+deb12u1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20230309.7c7625d-1_amd64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 1h 13m 15.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - ghcr.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 1
    stopped: 7
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/podman/.local/share/containers/storage
  graphRootAllocated: 196682272768
  graphRootUsed: 9006194688
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 38
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4
  Built: 0
  BuiltTime: Wed Dec 31 17:00:00 1969
  GitCommit: ""
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

I am setting up my first home server on Debian 12.5. I have updated my dependencies to allow me to use the latest podman and podman-compose. As part of that process I have set myself some semi-arbitrary security rules, not for any one specific reason, but for the learning experience and to get myself immersed in resolving issues. Some of these rules (and the ones I think are the likely culprits) are:

  1. All containers must be run rootlessly, no exceptions
  2. All services must only be accessible through Wireguard VPN
  3. All services must use subuids and subgids

So far, this has been going... well. I have these services running and working well in rootless containers:

  • Wireguard
  • Dnsmasq
  • Caddy
  • Grocy
  • Monica PRM

I am able to connect to my VPN and navigate to my services using the URLs configured in Caddy (using self-signed certificates), and everything just works.

The next phase was to set up a dashboard service, as I have this oldish touchscreen all-in-one PC that I plan to use as a sort of terminal in my kitchen. I looked at these possibilities, all of which result in the above error when I try to pull them:

  • Homepage
  • Homarr
  • Organizr
  • Heimdall
  • Dashy

When I try to create any of these, whether through podman directly or through podman-compose, it fails with the error:

Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:9f16480e2ff54481cb1ea1553429bf399e8269985ab0dec5b5af6f55ea747d3f": processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

This is my compose file:

version: '3.8'

services:
  dashy:
    image: lissy93/dashy:latest
    container_name: dashy
    ports:
      - "8002:8080"
    volumes:
      - ./my-conf.yml:/app/user-data/conf.yml:Z
    restart: unless-stopped

My subuid and subgid files look like this:

admin:100000:65536
podman:165536:65536

In every compose file I have specified a uidmap using x-podman. This has worked for everything so far. I have tried adding and removing this option from the Dashy config, and it did not change anything.

Zivodor avatar May 24 '24 18:05 Zivodor
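
For readers new to rootless podman, the subordinate-ID arithmetic behind this error can be sketched in a few lines of shell. This is an illustration only, not podman's actual code; `map_id` is a hypothetical helper that walks `uid_map`-style triples (container start, host start, size) the way the kernel resolves IDs in a user namespace.

```shell
#!/bin/sh
# Hypothetical helper: resolve a container ID through uid_map-style triples
# (container_start host_start size), printing the host ID or "unmapped".
map_id() {
  cid=$1; shift
  while [ "$#" -ge 3 ]; do
    c=$1; h=$2; s=$3; shift 3
    if [ "$cid" -ge "$c" ] && [ "$cid" -lt $((c + s)) ]; then
      echo $((h + cid - c)); return 0
    fi
  done
  echo unmapped
}

# The reporter's mapping from `podman info`: 0 -> 1001 (size 1), 1 -> 165536 (size 65536)
map_id 1000 0 1001 1 1 165536 65536   # prints 166535: mappable
map_id 70000 0 1001 1 1 165536 65536  # prints "unmapped": outside both ranges
```

Note that under the reporter's full rootless mapping, container ID 1000 is mappable (it lands on host ID 166535), which is what makes the pull error surprising; later comments point at a narrower per-container UIDMap as the likely source of the unmappable ID.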

podman-compose is a different repo. If you have a simple reproducer for this with straight podman that would be very helpful, otherwise this issue should be transferred to podman-compose.

rhatdan avatar May 27 '24 10:05 rhatdan

Regardless of whether I use podman or podman-compose, it fails with the same error. I ran the compose with debug, extracted the command it had generated, and tried running it manually; it resulted in the same error.

A full system reset for the root user and the rootless podman user did temporarily resolve the issue for me. I believe it's related to quadlets, as I had created a .container file for my Wireguard container, and after disabling that I stopped running into the issue.

Zivodor avatar May 27 '24 22:05 Zivodor

I also tried just calling podman pull against the image and it resulted in the same error.

Zivodor avatar May 27 '24 22:05 Zivodor

@giuseppe PTAL

rhatdan avatar May 28 '24 13:05 rhatdan

can you share the result of:

podman unshare cat /proc/self/uid_map

Does it reflect the configuration you have in /etc/subuid? If not, please run podman system migrate and try again. Do you still get the same output?

giuseppe avatar May 28 '24 14:05 giuseppe

podman@project-hydra:~$ podman unshare cat /proc/self/uid_map
         0       1001          1
         1     165536      65536

It is as expected. I should also note that it is not limited to a subset of images like I originally believed. While trying to resolve the issue I performed a podman system reset, which resolved it. After that, I enabled my wireguard.container service and tried to pull down an image that had previously worked, but it got the same error.

After I stopped the service, disabled it, then did another system reset, I was able to pull all the images successfully. As soon as I enable that service I start to get this issue persistently until I reset again. I am going to share that file as well:

[Container]
AddCapability=NET_ADMIN NET_RAW
ContainerName=wireguard
Environment=SERVERURL=[Correct Local Ip] SERVERPORT=[Correct Port] PEERS=# PEERDNS=auto INTERNAL_SUBNET=10.10.0.0/24
GIDMap=0:1:50
Image=docker.io/linuxserver/wireguard
Label=io.podman.compose.config-hash=4a0e91e3ad5f9fcf67930731fbf4d771c1b5f0f38ea6c5811c12c502c1304d21 io.podman.compose.project=wireguard io.podman.compose.version=1.1.0 [email protected] com.docker.compose.project=wireguard com.docker.compose.project.working_dir=/home/podman/appdata/wireguard com.docker.compose.project.config_files=podman-compose.yml com.docker.compose.container-number=1 com.docker.compose.service=wireguard
Network=wireguard-network
PublishPort=[Correct Port]:51820/udp
Sysctl=net.ipv4.conf.all.src_valid_mark=1 net.ipv4.conf.all.forwarding=1
UIDMap=0:1:50
Volume=/home/podman/appdata/wireguard/config:/config:Z

[Service]
Restart=always

[Install]
WantedBy=default.target

Zivodor avatar May 28 '24 15:05 Zivodor
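
One plausible reading of the quadlet above: `UIDMap=0:1:50` maps only container IDs 0 through 49, so any image content owned by container ID 1000 has no slot in that namespace, which matches the wording of the error. The range check itself is trivial; the sketch below uses a hypothetical `in_range` helper, and whether the failing pulls actually run under this restricted map is exactly what the thread goes on to test.

```shell
#!/bin/sh
# in_range CID START SIZE: succeed if container ID CID falls inside the
# mapped half-open range [START, START+SIZE).
in_range() {
  [ "$1" -ge "$2" ] && [ "$1" -lt $(($2 + $3)) ]
}

# UIDMap=0:1:50 covers container IDs 0..49 only
in_range 49 0 50   && echo "49: mapped"
in_range 1000 0 50 && echo "1000: mapped" || echo "1000: unmapped"
```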

Alright, I don't think it has anything to do with my .container file. I am running into the issue with or without that file there.

Zivodor avatar May 29 '24 04:05 Zivodor

I'm fairly new to all this stuff, but at the very least I can tell you that a full podman system reset does not reliably fix it. I had to delete the /home/podman/.local/share/containers/ directory in order to resolve the issue while testing today.

Zivodor avatar May 29 '24 04:05 Zivodor

I believe I am also running into the same or a similar issue. I am running Fedora Server and have set up a few quadlets to run services as rootless containers. I also use UIDMap to keep the mappings across containers disjoint. Today, I was trying to update my audiobookshelf service and pull the updated image. Initially, I updated the quadlet file to use the new image, but restarting the service failed with the processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1 error. I thought that meant I needed to update my UIDMap in some way, but I couldn't get it to work. Finally, I tried to simply pull the image, and that also produces the error:

$ podman pull ghcr.io/advplyr/audiobookshelf:2.10.1
Trying to pull ghcr.io/advplyr/audiobookshelf:2.10.1...
Getting image source signatures
Copying blob 60dba4733d48 done   | 
Copying blob e376fac3bde8 done   | 
Copying blob a5edbc7b296b done   | 
Copying blob b404b3c3a52d done   | 
Copying blob d25f557d7f31 skipped: already exists  
Copying blob 549237b48d78 done   | 
Copying blob 579ced6f4ee6 done   | 
Copying blob 0f5e4b3bfe3a done   | 
Copying blob 017d1384d304 done   | 
Copying blob 6a5424a2a7f4 done   | 
Copying blob 2b7b2cbf90bf done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:a5edbc7b296b518501cd1ac08999e0e4e399c55370bbbf7b1369503bbeb8957c": processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

I've found that this also happens on image version 2.10.0, but 2.9.0 is able to successfully pull.

rsulli55 avatar Jun 10 '24 01:06 rsulli55

Any updates on this?

Zivodor avatar Jul 19 '24 18:07 Zivodor

try dropping UIDMap=0:1:50, or adjust it to a bigger size, e.g. UIDMap=0:1:4096

giuseppe avatar Jul 22 '24 13:07 giuseppe
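
Applied to the quadlet above, that suggestion would look something like the fragment below. This is a sketch only; the size actually required depends on the highest UID/GID owning files inside the image's layers.

```
[Container]
# A wider range: container IDs 0..4095 stay mappable, which covers the
# UID 1000 named in the error message.
UIDMap=0:1:4096
GIDMap=0:1:4096
```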

@giuseppe dropping the UIDMap does resolve the issue, but it means that my container is running as the user who started the process, which I do not want. Increasing the range does nothing; I have already tried that.

Zivodor avatar Aug 17 '24 23:08 Zivodor

Actually, that does not resolve the issue. I tried it again today, and regardless of whether the UIDMap is there or not, it will fail to pull if a different image was pulled while the UIDMap was set for that container.

Zivodor avatar Aug 20 '24 00:08 Zivodor

Hi, similar problem here. I think I have found a reproducible way to trigger this error:

$ podman create --user 996:996 --uidmap=0:9000:1000 --gidmap=0:9000:1000 docker.io/authelia/authelia:latest
Trying to pull docker.io/authelia/authelia:latest...
Getting image source signatures
Copying blob 1bf49a3a08d1 done   | 
Copying blob 279483dfc6f3 done   | 
Copying blob 43c4264eed91 done   | 
Copying blob caeb0f2503c7 done   | 
Copying blob 19271812103c done   | 
Copying blob 43f9310622ed done   | 
Copying config 1b3baf75a7 done   | 
Writing manifest to image destination
b8ec292bd52b58507ff1d016b5f33e74d9d5b8bbbedbd2db79912a34fc79bc41
$ podman pull docker.io/traefik:3
Trying to pull docker.io/library/traefik:3...
Getting image source signatures
Copying blob 43c4264eed91 skipped: already exists  
Copying blob e5f06ee63d76 done   | 
Copying blob f60fb4c0fbec done   | 
Copying blob 9a6d31097c4f done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:9a6d31097c4f3c21e02c7f6779032d04082235d9e6855929cd31c3dd61d6eef1"/""/"sha256:9a1c58574d551b4c00564ed265e328014960961e5b3119a7daf4654f8f101569": unpacking failed (error: exit status 1; output: container ID 1001 cannot be mapped to a host ID)

Some extra info:

$ podman --version
podman version 5.2.4
$ id
uid=1001(podman) gid=1001(podman) groups=1001(podman)
$ cat /etc/subuid
(…)
podman:165536:65536
$ podman unshare cat /proc/self/uid_map
         0       1001          1
         1     165536      65536

ldesgrange avatar Oct 26 '24 15:10 ldesgrange
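
Two details of this reproducer line up with the error text (an observation, not a confirmed root cause): `--uidmap=0:9000:1000` maps container IDs 0 through 999, so the ID 1001 in the error is just outside it, and the failing traefik pull reports blob 43c4264eed91 as "skipped: already exists", i.e. it reuses a layer first stored during the authelia pull that ran under the restricted map. The arithmetic, as a sketch:

```shell
#!/bin/sh
# --uidmap=0:9000:1000 maps container IDs [0, 1000) to host IDs [9000, 10000).
# Hypothetical check mirroring that range:
last_mapped=$((0 + 1000 - 1))
echo "highest mappable container ID: $last_mapped"   # 999
[ 1001 -le "$last_mapped" ] || echo "container ID 1001 cannot be mapped"
```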

To resolve this issue, I had to create a volumes directory in my user-local path:

ls -al /home/user/.local/share/containers/storage/volumes
ls: cannot access '/home/user/.local/share/containers/storage/volumes': No such file or directory
mkdir /home/user/.local/share/containers/storage/volumes

mikerubicon avatar Dec 27 '24 12:12 mikerubicon

@mikerubicon

I verified that the path exists for me, but I did find that the volumes were owned by container users 😦:

drwx--x--x 17 podman podman 4096 Dec 27 14:48 .
drwx--x--x 12 podman podman 4096 Aug 23 15:15 ..
drwx------  3 173035 173035 4096 Aug 23 15:14 01705af90be67489b8429eb7214f10acce1cdcc281608681eed813d378fcddeb
drwx------  3 173035 173035 4096 Oct 14 13:54 0bf6a5486268d0e56e676e72c04b3d8a93a684799a736d93837a3d93230b0dd6
drwx------  3 176535 176535 4096 Dec 27 14:47 0e86964760675942905109f10a480383f1cfbb3af5149bef68106e1d8564cf6b
drwx------  3 176535 176535 4096 Dec 27 14:40 404b5f4c81cff00d7ed5bc7006dd564d3873536038594d5a12c72493e2aea343
drwx------  3 173035 173035 4096 Aug 23 15:14 43dffa44641b67434264bdd0a2cb17ec69330e670b890168ba57390199baabbd
drwx------  3 176535 176535 4096 Dec 27 14:45 617688a6a1e5d638015b91d6b6716fdb223dfe58af5c7351eac88d850870ae32
drwx------  3 173035 173035 4096 Oct 14 13:59 6a39fc2f27d90eefa6e3608a55b3879782aa36089039f949c539c2c4eb3d6335
drwx------  3 173035 173035 4096 Oct 14 13:54 91188a7dacbc095371edf0b6a671bf22972c14cb82130da62df53576c5d4de7a
drwx------  3 173035 173035 4096 Oct 14 13:59 9b80a4f59ed59cee907db6187a0748e1b705bee6727cb3668191b647992eef65
drwx------  3 176535 176535 4096 Dec 27 14:48 b35b0e7d2627252e3ca089bdd9c9f9dab8b884fcf06c584ce0f4c4532b5ceaa2
drwx------  3 176535 176535 4096 Sep 27 16:12 e56bd5852a4ec2ff47e8cdda35c321798857b1e488d4ed115584256c7c94c0da
drwx------  3 176535 176535 4096 Oct 14 13:55 ecb03746c8f9a0a322a57a2c8e1d8664802f4ae6834c543cedd77f5fc1c27545
drwx------  3 176535 176535 4096 Aug 23 15:17 f01b38ed96a4b8407c659503d85a3b60b29aeb91685a21cbdc47cdb28ff773be
drwx------  3 176535 176535 4096 Dec 27 14:42 f0ada11fafa65a5b9f9559b56bfebbeca3dd3fefb4c64c94df43f31fa38f58e4
drwx------  3 176535 176535 4096 Dec 27 14:48 fbcda0348c1d40f3c3effb9ffa5f148521a880fb427e0831c2381deea48902ff

This is very frustrating as whenever I want to add a new service, Gitea for example, I get this error and have no resolution steps beyond "Wipe everything out and try again".

Zivodor avatar Jan 06 '25 19:01 Zivodor

I'm having the same issue with podman pull:

> podman pull postgres:17.2-alpine3.21
Resolved "postgres" as an alias (/home/u1/.cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/postgres:17.2-alpine3.21...
Getting image source signatures
Copying blob fdcefadb5bb3 done   | 
Copying blob 3cf4f77660fd done   | 
Copying blob 1ddaf56854cd done   | 
Copying blob f562efc34463 done   | 
Copying blob 1f3e46996e29 skipped: already exists  
Copying blob d6eaa17dfd6a done   | 
Copying blob badd2a25d9ca done   | 
Copying blob f699f32c0574 done   | 
Copying blob 75de42a401ce done   | 
Copying blob c48dc11d8978 done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:1ddaf56854cd873be952033d07fd56f917cac4c4c2b122a36c82e66906015575"/""/"sha256:0a7931e438dd37f767106326540aa2a90a421e57a87f77caba966e5785f631a8": unpacking failed (error: exit status 1; output: container ID 70 cannot be mapped to a host ID)

volumes directory does exist:

> ls -al /home/u1/.local/share/containers/storage/volumes
total 12
drwx------  3 u1 u1 4096 Jan 25 20:29 .
drwx------ 10 u1 u1 4096 Jan 25 20:29 ..
drwx------  3 u1 u1 4096 Jan 25 20:29 ufo_postgresql_data

Fak3 avatar Jan 25 '25 16:01 Fak3

@ldesgrange podman pull docker.io/traefik:3

I can't reproduce with the steps you gave on a clean new user account:

u2@localhost:~/podtest> podman create --user 996:996 --uidmap=0:9000:1000 --gidmap=0:9000:1000 docker.io/authelia/authelia:latest
Trying to pull docker.io/authelia/authelia:latest...
Getting image source signatures
Copying blob 66dd6dfb3d40 done   | 
Copying blob 75be02e1156e done   | 
Copying blob 9c9d72ac3440 done   | 
Copying blob bd00b44f3ac7 done   | 
Copying blob a888b6b0b637 done   | 
Copying blob 38a8310d387e done   | 
Copying config 5d95c4f08b done   | 
Writing manifest to image destination
f895c3222b31970f880308a8086f30bf22017fab8ce8f3b818d430141e2131b4
u2@localhost:~/podtest> podman pull docker.io/traefik:3
Trying to pull docker.io/library/traefik:3...
Getting image source signatures
Copying blob 1f3e46996e29 done   | 
Copying blob 0c7e1a2deb57 done   | 
Copying blob 546f86135cc0 done   | 
Copying blob a8e9a2da4a5e done   | 
Copying config 88eafdd76c done   | 
Writing manifest to image destination
88eafdd76c933a76798a389d994b4fdd6b5edb89d702aae10c4350ecaa3febb9
> podman version
Client:       Podman Engine
Version:      5.3.1
API Version:  5.3.1
Go Version:   go1.23.4
Built:        Fri Dec  6 16:50:30 2024
OS/Arch:      linux/amd64
> id
uid=1000(u2) gid=1000(u2) groups=1000(u2)
> podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536
u2@localhost:~/podtest> cat /etc/subuid
u2:100000:65536
u1:165536:65536
containers:300000:165536
dockremap:100000000:100000001
u2@localhost:~/podtest> cat /etc/subgid
u2:100000:65536
u1:165536:65536
containers:300000:99999
dockremap:100000000:100000001

Fak3 avatar Jan 26 '25 07:01 Fak3

Any updates on this? I am still getting this error every time I try to add a new application to my homelab. I just need to know what causes it so I can avoid it. I can't keep wiping all my podman data and re-pulling every image every time.

I don't need a bug fix or anything like that, just some kind of understanding of what causes this specific issue to occur. I cannot find any documentation about this error, about what causes it, or about how to resolve it, so any help at this point is appreciated.

Zivodor avatar Jun 12 '25 16:06 Zivodor

Update: Please go to this link. It worked for me. It's possible that the uid and gid ranges conflict. It's better to try the commands below.

# Remove the older ones
sudo sed -i "/$(whoami)/d" /etc/subuid
sudo sed -i "/$(whoami)/d" /etc/subgid


# Add new ones
echo "$(whoami):200000:65536" | sudo tee -a /etc/subuid
echo "$(whoami):200000:65536" | sudo tee -a /etc/subgid

Hi, having the same issue. I tried podman system migrate.

Trying to pull registry.suse.com/postgres:13-alpine...
Error: initializing source docker://registry.suse.com/postgres:13-alpine: reading manifest 13-alpine in registry.suse.com/postgres: manifest unknown
   PostgreSQL is starting, waiting for 10 seconds...
3. Test table is creating...
Error: no container with name or ID "postgres-test" found: no such container
4. Python test container is running...
Resolved "python" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/python:3.12-slim...
Getting image source signatures
Copying blob 396b1da7636e done   | 
Copying blob d76206d463c0 done   | 
Copying blob e5c05dcf47fb done   | 
Copying blob 5d587318e932 done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:396b1da7636e2dcd10565cb4f2f952cbb4a8a38b58d3b86a2cacb172fb70117c": processing tar file(potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid if configured locally and run "podman system migrate": lchown /etc/gshadow: invalid argument): exit status 1
cat /etc/subuid
ddisc-ux:100000:65536
splunkfwd:165536:65536
nessus:231072:65536
dockremap:100000000:100000001
username:100000:65536

---------
cat /etc/subgid
ddisc-ux:100000:65536
splunkfwd:165536:65536
nessus:231072:65536
dockremap:100000000:100000001
username:100000:65536
podman version
Client:       Podman Engine
Version:      4.9.5
API Version:  4.9.5
Go Version:   go1.24.4
Built:        Mon Jun  2 14:00:00 2025
OS/Arch:      linux/amd64

hymekeci avatar Aug 14 '25 13:08 hymekeci
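
A closing note on the suggested /etc/subuid rewrite above: one way these failures can arise is overlapping name:start:size lines, and the listing in the last comment does contain two users sharing 100000:65536 (ddisc-ux and username). Detecting such a conflict is a simple interval intersection; the sketch below uses a hypothetical `overlaps` helper, not a podman command.

```shell
#!/bin/sh
# overlaps START1 SIZE1 START2 SIZE2: succeed if the two half-open ranges
# [START1, START1+SIZE1) and [START2, START2+SIZE2) intersect.
overlaps() {
  [ "$1" -lt $(($3 + $4)) ] && [ "$3" -lt $(($1 + $2)) ]
}

# Two users both assigned 100000:65536 (as in the pasted /etc/subuid): conflict
overlaps 100000 65536 100000 65536 && echo conflict || echo ok
# After moving one user to 200000:65536, as the quoted fix does: disjoint
overlaps 100000 65536 200000 65536 && echo conflict || echo ok
```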