[BUG] Auth file not created after successful login
Version
podman version 3.4.2 on Ubuntu 20.04 LTS
Describe the bug
I had a hard time setting up Podman on Ubuntu 20.04; first I got stuck on a UID/GID-related issue, and now I'm stuck on the login step. I pass a username and token from environment variables to the login step, and the login succeeds. However, right after login it complains about a missing auth.json file and the workflow fails. Podman supports an --authfile parameter, but I don't see it in the podman-login action.
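For reference, this is the CLI flag I mean; the file path here is just an example:

```sh
# podman login can write credentials to an explicit file via --authfile.
podman login --authfile /tmp/auth.json \
  -u "$OCP_SERVICE_ACCOUNT_NAME" -p "$OCP_SERVICE_ACCOUNT_TOKEN" \
  "$OCP_BASE_EXM_IMAGE_REPOSITORY_URL"
```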
Action code:
```yaml
- name: login to image registry
  uses: redhat-actions/podman-login@v1
  with:
    username: ${{ env.OCP_SERVICE_ACCOUNT_NAME }}
    password: ${{ env.OCP_SERVICE_ACCOUNT_TOKEN }}
    registry: ${{ env.OCP_BASE_EXM_IMAGE_REPOSITORY_URL }}
```
Steps to reproduce, workflow links, screenshots
Restarting my self-hosted runner probably caused this problem, but the file should be created automatically right after a successful login.
Attaching the screenshot as well.

Output of podman info --debug
```
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 2
  distribution:
    codename: focal
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: vm-integration-ocp
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.13.0-1021-azure
  linkmode: dynamic
  logDriver: journald
  memFree: 2607464448
  memTotal: 8340815872
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: ea1fe3938eefa14eb707f1d22adff4db670645d6
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 0
  swapTotal: 0
  uptime: 2h 27m 54.28s (Approximately 0.08 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/deuser/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.ignore_chown_errors: "true"
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.9.0
        fuse-overlayfs: version 1.5
        FUSE library version 3.9.0
        using FUSE kernel interface version 7.31
  graphRoot: /home/deuser/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 9
  runRoot: /run/user/1000/containers
  volumePath: /home/deuser/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.16.6
  OsArch: linux/amd64
  Version: 3.4.2
```
Output of podman login:
```
INFO[0000] podman filtering at log level debug
DEBU[0000] Called logout.PersistentPreRunE(podman --log-level debug logout default-registly**** (hidden)/observability/was-example)
DEBU[0000] overlay storage already configured with a mount-program
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] overlay storage already configured with a mount-program
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/deuser/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/deuser/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/deuser/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/deuser/.local/share/containers/storage/volumes
DEBU[0000] overlay storage already configured with a mount-program
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] overlay: ignore_chown_errors=true
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Found CNI network podman (type=bridge) at /home/deuser/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
Removed login credentials for default-registly**** (hidden)/observability/was-example
DEBU[0000] Called logout.PersistentPostRunE(podman --log-level debug logout default-registly**** (hidden)/observability/was-example)

deuser@vm-integration-ocp:~$ podman --log-level debug login default-registly**** (hidden)/observability/was-example
INFO[0000] podman filtering at log level debug
DEBU[0000] Called login.PersistentPreRunE(podman --log-level debug login default-registly**** (hidden)/observability/was-example)
DEBU[0000] overlay storage already configured with a mount-program
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] overlay storage already configured with a mount-program
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/deuser/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/deuser/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/deuser/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/deuser/.local/share/containers/storage/volumes
DEBU[0000] overlay storage already configured with a mount-program
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] overlay: ignore_chown_errors=true
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Found CNI network podman (type=bridge) at /home/deuser/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Found credentials for default-registly**** (hidden) in credential helper containers-auth.json
Authenticating with existing credentials for https://image-registry******* (hidden)/observability/was-example
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/d https://image-registry******* (hidden)
DEBU[0000] GET https://image-regsitry
DEBU[0000] Ping https://image-regsitry status 401
DEBU[0000] GET https://image-registry******* (hidden)
Existing credentials are invalid, please enter valid username and password
Username (oc rsharm13@.com): rsharm@*
Password:
DEBU[0040] Looking for TLS certificates and private keys in /etc/docker/certs.d/default-registly**** (hidden)
DEBU[0040] GET https://image-regsitry
DEBU[0040] Ping https://image-regsitry status 401
DEBU[0040] GET https://image-registry******* (hidden)
DEBU[0040] Increasing token expiration to: 60 seconds
DEBU[0040] GET https://image-regsitry
DEBU[0040] Stored credentials for https://image-registry******* (hidden)/observability/was-example in credential helper containers-auth.json
Login Succeeded!
DEBU[0040] Called login.PersistentPostRunE(podman --log-level debug login https://image-registry******* (hidden)/observability/was-example)
```
I tried this with a self-hosted runner and it works fine for me.
I think the problem with the self-hosted runner you are using is that, after podman login, the auth file is by default written to XDG_RUNTIME_DIR/containers/auth.json (or /tmp/podman-run-1000/containers/auth.json), but in your case this file is not being generated.
That said, in the logs you shared I do see it storing credentials via containers-auth.json:
Stored credentials for https://image-registry/******* (hidden)/observability/was-example in credential helper containers-auth.json
Are you specifying a containers-auth.json file anywhere during login?
Also, containers-auth.json refers to the defaults I mentioned above. Ref: https://man.archlinux.org/man/community/containers-common/containers-auth.json.5.en
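If it helps, here is a quick way to check which of those default locations applies on your runner (paths are the defaults from the man page above):

```sh
# Show whether XDG_RUNTIME_DIR is set, then look for the auth file in the
# two default locations described in containers-auth.json(5).
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-<unset>}"
ls -l "${XDG_RUNTIME_DIR:-/tmp/podman-run-$(id -u)}/containers/auth.json"
```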
I don't see containers-auth.json on my system either, nor did I remove the auth.json file manually. But I do have /home/deuser/.docker/config.json on the system. I understand that the existence of this file will not cause auth.json to be created, but it shouldn't make the podman-login action fail either, should it?
When I tried on my machine, I had $HOME/.docker/config.json, and auth.json was still created after podman login.
Would it be possible for you to manually run podman login on your self-hosted runner?
A new --verbose option has been introduced in podman login that will tell us the exact path of the auth file.
Let me know if you can run that; otherwise I'll make some changes in this action to run podman login with --verbose.
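Something like this, with a placeholder registry:

```sh
# --verbose makes podman print the auth file path it used,
# e.g. "Used: /run/user/1000/containers/auth.json".
podman login --verbose -u "$USERNAME" -p "$PASSWORD" <your-registry>
```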
I have also created an issue for your request to have an authfile flag in podman-login: https://github.com/redhat-actions/podman-login/issues/19
We have added support for a custom auth file and verbose output during login. Please use the latest version (v1/v1.3) and re-run the workflow. Please also share the updated logs, so that I can identify the exact problem.
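For example, a sketch of the updated step; I'm assuming the new input is named auth_file_path (per issue #19 above), and the path is just an example:

```yaml
- name: login to image registry
  uses: redhat-actions/podman-login@v1.3
  with:
    username: ${{ env.OCP_SERVICE_ACCOUNT_NAME }}
    password: ${{ env.OCP_SERVICE_ACCOUNT_TOKEN }}
    registry: ${{ env.OCP_BASE_EXM_IMAGE_REPOSITORY_URL }}
    # assumption: input name taken from issue #19; check the action README
    auth_file_path: /tmp/podman-auth.json
```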
@rajeevsh990 any update?
Hi,
I'm experiencing this issue. I'm using v1.4 on a freshly installed self-hosted runner (I had no /local/home/github_runner/.docker/config.json). The first error was that /local/home/github_runner/.docker/config.json didn't exist, so I logged into the self-hosted runner and created an empty config.json. Then I got the following error:
```
Run redhat-actions/podman-login@v1.4
/bin/podman version
/bin/podman login artifactory.boschdevcloud.com/cross-functions-docker-virtual -u maa1pm -p *** --verbose
Used: /tmp/podman-run-501/containers/auth.json
Login Succeeded!
✅ Successfully logged in to artifactory.boschdevcloud.com/cross-functions-docker-virtual as maa1pm
Exporting REGISTRY_AUTH_FILE=/tmp/podman-run-501/containers/auth.json
✍️ Writing registry credentials to "/local/home/github_runner/.docker/config.json"
Error: TypeError: Cannot set property 'artifactory.boschdevcloud.com/cross-functions-docker-virtual' of undefined
```
XDG_RUNTIME_DIR was empty, so podman login was creating the auth file at /tmp/podman-run-501/containers/auth.json.
To fix the issue I had to manually copy the file (cp /tmp/podman-run-501/containers/auth.json /local/home/github_runner/.docker/config.json), and then the error was gone. So what is failing is that last step, writing a correct /local/home/github_runner/.docker/config.json file.
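In a workflow, the workaround can run as a step roughly like this (paths taken from the logs above; adjust for your runner):

```sh
# Copy the auth file podman actually wrote to the path the action expects.
mkdir -p "$HOME/.docker"
cp "${XDG_RUNTIME_DIR:-/tmp/podman-run-$(id -u)}/containers/auth.json" "$HOME/.docker/config.json"
```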
I'm confirming that @amarruedo's workaround works, with this command:
```
$ cp /run/user/1000/containers/auth.json /home/<user>/.docker/config.json
```
But why isn't the podman-login action creating a valid config.json automatically?
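One guess, based on the TypeError above: the action seems to parse the existing config.json and assign into its auths map, so a config.json without an auths key would make that assignment fail. Seeding the file might avoid the crash, though I haven't verified this:

```sh
# Untested guess: give config.json an empty "auths" object so a write into
# auths[registry] has an object to assign into.
mkdir -p "$HOME/.docker"
echo '{ "auths": {} }' > "$HOME/.docker/config.json"
```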
Hi,
We use the buildah docker image (quay.io/redhat-github-actions/buildah-runner:v2) to run our build jobs, with runners hosted in an OpenShift cluster, and we have the same issue.
Temporary workarounds:
- cp auth.json: works, but you need to create the .docker folder first, and when pods are reallocated we lose this config.
- use the buildah login command instead of the podman-login action (see the sketch below).
I also tried logging in with podman directly from the container; that works, but it doesn't create the docker folder and config file.
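The buildah fallback we use looks like this (registry is a placeholder):

```sh
# buildah login accepts the same basic flags as podman login.
buildah login -u "$USERNAME" -p "$PASSWORD" <your-registry>
```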