[Feature Request] Support for Podman
Is your feature request related to a problem? Please describe.
Currently, watchtower requires /var/run/docker.sock, which is not present on a system with only podman installed (i.e. where docker is not installed).
Describe the solution you'd like
Potentially supporting podman in the future.
Thanks!
Looks like podman has a docker-compatible REST API, so it should work out of the box as far as I can tell. I haven't tried it myself though: https://podman.io/blogs/2020/07/01/rest-versioning.html
If the podman service is running, this should work:
podman run -v /var/run/podman/podman.sock:/var/run/docker.sock docker.io/containrrr/watchtower
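If the compat endpoint works, the regular Docker Go SDK should be able to talk to it as well; a minimal sketch (not watchtower code), assuming the podman system service is listening on /var/run/podman/podman.sock:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Point the standard Docker client at podman's docker-compatible socket.
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///var/run/podman/podman.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Any docker-API call should be answered by podman's compat endpoint.
	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("server version:", info.ServerVersion)
}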
Update: it sort of worked when I tried it:
2021-08-31T18:00:53Z [D] Doing a HEAD request to fetch a digest
url: https://index.docker.io/v2/containrrr/watchtower/manifests/latest
2021-08-31T18:00:53Z [D] Found a remote digest to compare with
remote: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
2021-08-31T18:00:53Z [D] Comparing
local: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
remote: sha256:3283e0b5be326d77ff4f4e8b7a91d46aaa1d511c74877b5a32f161548812d00c
2021-08-31T18:00:53Z [D] Found a match
2021-08-31T18:00:53Z [D] No pull needed. Skipping image.
2021-08-31T18:00:53Z [I] Found new docker.io/containrrr/watchtower:latest image (9167b324e914)
2021-08-31T18:00:53Z [D] This is the watchtower container /sweet_greider
2021-08-31T18:00:53Z [D] Renaming container /sweet_greider (4c4ec40ff1a6) to PPctWJctHvcNrpXpLaCSQaSdAmRtxEdN
2021-08-31T18:00:53Z [I] Creating /sweet_greider
2021-08-31T18:00:53Z [E] Error response from daemon: fill out specgen: ulimit option "RLIMIT_NOFILE=1048576:1048576" requires name=SOFT:HARD, failed to be parsed: invalid ulimit type: RLIMIT_NOFILE
2021-08-31T18:00:53Z [D] Session done: 1 scanned, 0 updated, 1 failed
Not really sure what is going on here. It checks the watchtower image and concludes "No pull needed. Skipping image.", but then goes ahead and tries to update it anyway?
Then it fails when trying to re-create itself. It might also just be that my podman installation is broken... perhaps someone with a known working setup can try it and share their experience?
The cause of the error above is https://github.com/containers/podman/issues/9803 (Finding 4). Additionally, the container "name" is not accepted by podman, since it has a leading slash:
INFO[0003] Creating /nginx-test
ERRO[0003] Error response from daemon: container create: error running container create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument
By patching these two fields in the new config, I was able to successfully recreate a podman container:
// pkg/container/client.go:214
hostConfig.Ulimits = nil // drop the ulimits copied from the old container; podman refuses to parse them
name = name[1:]          // strip the leading slash from the container name
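Going further, a minimal sketch of how these two workarounds could be gated behind a dedicated flag (the podmanMode parameter and the helper name are hypothetical, not watchtower's actual API):

package container

import (
	"strings"

	dockercontainer "github.com/docker/docker/api/types/container"
)

// applyPodmanWorkarounds adjusts the copied configuration so podman accepts it.
// Hypothetical helper: podmanMode would come from a new "podman mode" CLI flag.
func applyPodmanWorkarounds(hostConfig *dockercontainer.HostConfig, name string, podmanMode bool) string {
	if !podmanMode {
		return name
	}
	// podman cannot parse the RLIMIT_*-style ulimit names it reports itself.
	hostConfig.Ulimits = nil
	// podman rejects the docker-style leading slash in container names.
	return strings.TrimPrefix(name, "/")
}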
Log output from a successful run:
nils@xiwangmu:~/src/watchtower $ go build && sudo ./watchtower --trace --run-once --host unix:///var/run/podman/podman.sock
DEBU[0000]
DEBU[0000] Sleeping for a second to ensure the docker api client has been properly initialized.
DEBU[0001] Making sure everything is sane before starting
INFO[0001] Watchtower v0.0.0-unknown
Using no notifications
Checking all containers (except explicitly disabled with label)
Running a one time update.
WARN[0001] trace level enabled: log will include sensitive information as credentials and tokens
DEBU[0001] Checking containers for updated images
DEBU[0001] Retrieving running containers
DEBU[0001] Trying to load authentication credentials. container=/nginx-test image="docker.io/library/nginx:latest"
DEBU[0001] No credentials for docker.io found config_file=/config.json
DEBU[0001] Got image name: docker.io/library/nginx:latest
DEBU[0001] Checking if pull is needed container=/nginx-test image="docker.io/library/nginx:latest"
DEBU[0001] Building challenge URL URL="https://index.docker.io/v2/"
DEBU[0001] Got response to challenge request header="Bearer realm=\"https://auth.docker.io/token\",service=\"registry.docker.io\"" status="401 Unauthorized"
DEBU[0001] Checking challenge header content realm="https://auth.docker.io/token" service=registry.docker.io
DEBU[0001] Setting scope for auth token image=docker.io/library/nginx scope="repository:library/nginx:pull"
DEBU[0001] No credentials found.
DEBU[0002] Parsing image ref host=index.docker.io image=docker.io/library/nginx normalized="docker.io/library/nginx:latest" tag=latest
TRAC[0002] Setting request token
DEBU[0002] Doing a HEAD request to fetch a digest url="https://index.docker.io/v2/library/nginx/manifests/latest"
DEBU[0002] Found a remote digest to compare with remote="sha256:4d4d96ac750af48c6a551d757c1cbfc071692309b491b70b2b8976e102dd3fef"
DEBU[0002] Comparing local="sha256:4d4d96ac750af48c6a551d757c1cbfc071692309b491b70b2b8976e102dd3fef" remote="sha256:4d4d96ac750af48c6a551d757c1cbfc071692309b491b70b2b8976e102dd3fef"
DEBU[0002] Found a match
DEBU[0002] No pull needed. Skipping image.
INFO[0002] Found new docker.io/library/nginx:latest image (a25dfb1cd178)
INFO[0002] Stopping /nginx-test (5008385716af) with SIGTERM
DEBU[0011] Removing container 5008385716af
INFO[0011] Creating nginx-test
DEBU[0011] Starting container /nginx-test (4a74f0351cfb)
DEBU[0011] Session done: 1 scanned, 0 updated, 0 failed
Waiting for the notification goroutine to finish
This still leaves the issue of it always trying to update the containers though:
DEBU[0002] No pull needed. Skipping image.
INFO[0002] Found new docker.io/library/nginx:latest image (a25dfb1cd178)
Found the cause of the image always being treated as "stale":
nils@xiwangmu:~ $ sudo curl -s --unix-socket /run/podman/podman.sock 'http://d/v3.0.0/containers/nginx-test/json' | jq -C '.Image'
"docker.io/library/nginx:latest"
Podman doesn't return the image ID in the container inspect result, giving the image "name" instead. This could be solved by using the podman-specific API endpoint:
sudo curl -s --unix-socket /run/podman/podman.sock 'http://d/v3.0.0/libpod/containers/nginx-test/json' | jq -C .Image
"a25dfb1cd178de4f942ab5a87d3d999e3f981bb9f36fc6ee38b04669e14c32d2"
So, overall, this could be implemented, but would require a special flag for "podman mode".
+1 I'm considering transitioning future systems to use Podman and would like to see these edge cases worked out.
feel free to join in on the efforts 👍🏽
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Note that podman has something like watchtower built-in: https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html
I see one needs to create a systemd unit for each container to be auto-updated. If I want all my containers auto-updated, does that mean I need to create a systemd unit file for each one (rather than having a single unit auto-update all containers, including those installed in the future)? Thanks
@piksel it now returns the image ID, so this could be done. Ulimits are still an issue.
@R8s6 please read the linked docs. Podman generates the systemd files for you.
I made a PR, but still facing this issue:
cannot set memory swappiness with cgroupv2: OCI runtime error
Seems that podman does NOT like having everything specified explicitly, even when the value was just the default.
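If this is the same class of problem as the ulimits above, the workaround would presumably be to stop copying the defaulted value into the new config; a hedged sketch using the Docker API types (the helper itself is hypothetical):

package container

import dockercontainer "github.com/docker/docker/api/types/container"

// clearCgroupV2Incompatibilities drops defaulted values that crun refuses
// under cgroup v2. Hypothetical helper, mirroring the earlier Ulimits fix.
func clearCgroupV2Incompatibilities(hostConfig *dockercontainer.HostConfig) {
	// crun errors out when memory swappiness is set at all on cgroup v2,
	// even if the value is only the copied default.
	hostConfig.MemorySwappiness = nil
}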
I have this error on multiple containers now. How can I fix what watchtower messed up?
Remove the containers and create them again.
The same error.
Try to build podman with this commit: https://github.com/containers/podman/commit/6ea703b79880c7f5119fe5355074f8e971df6626
CONTAINER ID  IMAGE                                   COMMAND               CREATED       STATUS       PORTS  NAMES
2e9e8b635337  quay.io/outline/shadowbox:stable        /bin/sh -c /cmd.s...  39 hours ago  Up 39 hours          shadowbox
58d75a814a6a  docker.io/containrrr/watchtower:latest  --cleanup --label...  39 hours ago  Up 39 hours          watchtower
thanks!
One container could not be started after re-creation:
time="2023-05-18T18:10:24Z" level=info msg="Found new docker.io/binwiederhier/ntfy:latest image (2434c49f7c33)"
time="2023-05-18T18:10:26Z" level=info msg="Stopping /ntfy (09b65f2ca3c5) with SIGTERM"
time="2023-05-18T18:10:27Z" level=info msg="Creating /ntfy"
time="2023-05-18T18:10:27Z" level=error msg="Error response from daemon: crun: cannot set memory swappiness with cgroupv2: OCI runtime error"
time="2023-05-18T18:10:27Z" level=info msg="Session done" Failed=1 Scanned=9 Updated=0 notify=no
time="2023-05-18T18:10:27Z" level=error msg="Failed to send shoutrrr notification" error="failed to send ntfy notification: got HTTP 502 Bad Gateway" index=0 notify=no service=ntfy
Same here... it re-creates the container, but it fails to start.
Confirming the issue is still present in podman release v4.6.0.
Still present in 4.8.x. I'm using Nextcloud-AIO with rootless podman. For the mastercontainer update it uses watchtower, which works fine with docker but not with podman: I get the swappiness error on container start after watchtower has created the new one.
Does watchtower do something similar to running podman container clone ... via the CLI?
I found this issue related to the clone command:
https://github.com/containers/podman/issues/13916
Unfortunately it is unsolved. Maybe someone can raise a new issue in podman to get this fixed?
I have the same issue.