Image signing fails with hardware gpg key
Issue Description
I use a YubiKey for GPG, and when trying to sign an image during podman push --sign-by, GPG fails to locate the card. Modern GnuPG places the agent socket in the $XDG_RUNTIME_DIR/gnupg directory, which usually translates to /run/user/<uid>/gnupg. It appears that podman push uses a user namespace by default, so it looks up /run/user/0/gnupg and fails. This ultimately leads to the startup of another gpg-agent instance (with its socket in $HOME/.gnupg, due to permission issues in /run/user), and the new instance fails to find the card. If a symlink /run/user/0 -> /run/user/<uid> is created temporarily, signing works fine.
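A quick way to see the mismatch (a sketch, assuming podman unshare enters the same rootless user namespace that podman push uses):

# Outside the namespace the agent socket is under the real UID's runtime dir:
ls "$XDG_RUNTIME_DIR/gnupg"
# Inside podman's user namespace the current UID is mapped to 0, so GnuPG's
# hard-coded /run/user/<uid> convention points at a directory that normally
# does not exist, and the lookup fails:
podman unshare sh -c 'id -u; ls /run/user/0/gnupg'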
Steps to reproduce the issue
- Sign an image (podman push --sign-by) while gpg-agent is running with its socket in $XDG_RUNTIME_DIR/gnupg (see the sketch below).
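A concrete reproduction sketch (the key ID and image/registry names below are placeholders, not taken from this report):

# Make sure the agent is running and has its socket under $XDG_RUNTIME_DIR/gnupg:
gpg-connect-agent /bye
ls "$XDG_RUNTIME_DIR/gnupg/S.gpg-agent"
# Push with signing; the gpg invoked by podman runs inside the rootless user
# namespace and does not reach this agent:
podman push --sign-by jan@example.com localhost/myimage docker://registry.example.com/myimage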
Describe the results you received
A new gpg-agent instance is started.
Describe the results you expected
The already running gpg-agent instance should be used.
podman info output
host:
  arch: arm64
  buildahVersion: 1.37.1
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: 235d05815a414932b651d474a8cb6462d512153a'
  cpuUtilization:
    idlePercent: 95.36
    systemPercent: 1.78
    userPercent: 2.86
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: pld
    version: "3.0"
  eventLogger: journald
  freeLocks: 2045
  hostname: rock
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.1.43-14-rk2312
  linkmode: dynamic
  logDriver: journald
  memFree: 4751888384
  memTotal: 16481316864
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.12.0
    package: Unknown
    path: /usr/libexec/podman/netavark
    version: netavark 1.12.0
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 1.16.1
      commit: afa829ca0122bd5e1d67f1f38e6cc348027e3c32
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: Unknown
    version: |
      pasta 2024_07_26.57a21d2
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 8236298240
  swapTotal: 8239706112
  uptime: 21h 38m 21.00s (Approximately 0.88 days)
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/users/jan/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /mnt/build-storage/jan/containers
  graphRootAllocated: 269427478528
  graphRootUsed: 3615887360
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /home/users/jan/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /mnt/build-storage/jan/containers/volumes
version:
  APIVersion: 5.2.1
  Built: 1723674833
  BuiltTime: Thu Aug 15 00:33:53 2024
  GitCommit: ""
  GoVersion: go1.22.6
  Os: linux
  OsArch: linux/arm64
  Version: 5.2.1
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
Additional information
Although I've just found out about #16406, which makes the signing process not very practical anyway...
In general, rootless podman runs in a user namespace where we are mapped as root, so if we execute other binaries they logically assume they run as root (uid 0). For containers we have something like --userns keep-id that maps the uid on the host to the same uid in the container. I wonder if this is something we can do when we know we invoke external commands that need a proper uid setup as well.
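For illustration, a sketch of the difference (the expected uid_map values are taken from the idMappings in the podman info output above; the alpine image is just an example):

# Default rootless mapping: the invoking user (uid 1000 here) is mapped to root
# inside podman's user namespace:
podman unshare cat /proc/self/uid_map
#          0       1000          1
#          1     100000      65536
# With --userns keep-id the host uid stays the same inside the container:
podman run --rm --userns keep-id alpine id -u
# 1000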
cc @mtrmac
It might be possible but it seems non-trivial to me.
From Podman’s point of view, the signing happens as a part of the push operation in c/common/libimage — and I think we do need to run at least the c/storage accesses in the typical user namespace (depending on details of the graph driver and specific filesystem backend, but at least in the fallback “naive diff” implementation). c/storage is, I think, not set up to specifically identify / isolate the parts that need the user namespace from the rest; that would require a detailed codebase audit.
If the suggestion is to run in the typical user namespace, but only to run the signing process in a nested more specialized ID mapping environment: c/image probably shouldn’t know the details, but passing, as an option, a function to use for all subprocess creations instead of the standard-library os/exec would make sense to me, as a general principle…
… in practice, here, the GPG subprocesses are executed by a C library libgpgme, so we just don’t have that kind of control; we would have to introduce an extra IPC layer from podman+c/image to our own single-use GPG server running in a modified namespace, which then further uses libgpgme to run GPG subprocesses.
I’m also worried that such a nested-user-namespace setup could have other unexpected effects: the GPG agent is, typically, a user-account-shared resource, potentially (as in here) started on-demand on first use, so we could create an agent in an unusual/unexpected namespace configuration and affect all future non-Podman operations. And if we are talking about smart cards and other non-plain-vanilla setups, I’m afraid a single UID/GID mapping might not be sufficient to replicate all the expectations of that software. It would be far better to start the agent from outside of Podman’s namespaces, and let the signing only trigger a request.
Would it work for the caller to explicitly specify GNUPGHOME? Looking at https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=common/homedir.c;h=392910867feb5cc0296ec0b34b5c3404eb017fc9;hb=refs/heads/master#l1416, that might be a workaround.
Podman already manipulates XDG_RUNTIME_DIR for the user-namespace processes, exactly to work around these situations, but from a quick skim of this GPG code it seems to me that GPG does not actually read that variable, it hard-codes the /run/user convention.
Note that GNUPGHOME is used for other purposes too, so while it could be a workaround for finding the socket if set to the /run/user path, relevant configuration in e.g. gpg.conf would be lost.
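For reference, a minimal sketch of what that caller-side workaround might look like from the shell (untested; the key ID and image names are placeholders, whether gpg actually finds the agent socket this way is left open above, and as just noted the settings from ~/.gnupg/gpg.conf would no longer apply):

GNUPGHOME=/run/user/1000/gnupg podman push --sign-by jan@example.com localhost/myimage docker://registry.example.com/myimage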
If the suggestion is to run in the typical user namespace, but only to run the signing process in a nested more specialized ID mapping environment: c/image probably shouldn’t know the details, but passing, as an option, a function to use for all subprocess creations instead of the standard-library os/exec would make sense to me, as a general principle… … in practice, here, the GPG subprocesses are executed by a C library libgpgme, so we just don’t have that kind of control; we would have to introduce an extra IPC layer from podman+c/image to our own single-use GPG server running in a modified namespace, which then further uses libgpgme to run GPG subprocesses.
Right, this is what I was thinking; if we do not directly execute these commands anyway, then the extra work is certainly high.
I’m also worried that such a nested-user-namespace setup could have other unexpected effects: the GPG agent is, typically, a user-account-shared resource, potentially (as in here) started on-demand on first use, so we could create an agent in an unusual/unexpected namespace configuration and affect all future non-Podman operations. And if we are talking about smart cards and other non-plain-vanilla setups, I’m afraid a single UID/GID mapping might not be sufficient to replicate all the expectations of that software. It would be far better to start the agent from outside of Podman’s namespaces, and let the signing only trigger a request.
Well, we already execute it from within the podman userns today; with very few exceptions podman always runs in the user namespace, and of course there is no way to unjoin it, hence my suggestion to at least somehow fix the ID mappings, so I don't think it would make things any worse. But yes, whether the result would be any better I am not sure either.
A friendly reminder that this issue had no activity for 30 days.
Any chance of fixing this? For now I have a symlink workaround:
# Link the agent sockets from $XDG_RUNTIME_DIR/gnupg into ~/.gnupg, which is
# where the gpg started inside podman's user namespace falls back to looking:
cd ~/.gnupg
for i in S.gpg-agent.ssh S.gpg-agent.extra S.gpg-agent.browser S.gpg-agent S.scdaemon; do
  ln -s "${XDG_RUNTIME_DIR}/gnupg/$i" "$i"
done