BUG: Routing in containers broken when using DHCP with NetworkManager
Issue Description
I'm running openSUSE MicroOS with podman. I have a Traefik container running which serves as a reverse proxy for the rest of my containers, and which also uses Let's Encrypt to get certificates.
Traefik is set up using quadlets, and I use the socket-activation method so that source IP addresses appear correctly in my logs.
$ podman version
Client: Podman Engine
Version: 5.4.1
API Version: 5.4.1
Go Version: go1.24.1
Built: Mon Mar 17 09:47:14 2025
OS/Arch: linux/amd64
I noticed yesterday that it stopped serving LE certs and I was only getting the built-in self-signed one.
After some investigation I found that the routing is broken, so traffic from the containers cannot be routed out towards the internet.
The box in question is running on Netcup (a VPS host), and its IP address is assigned by NetworkManager via DHCP.
# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d6:b6:e8:96:ed:58 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname enxd6b6e896ed58
inet 188.68.40.116/22 brd 188.68.43.255 scope global dynamic noprefixroute ens3
valid_lft 2678278sec preferred_lft 2678278sec
inet6 2a03:4000:17:fbf::116/128 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::d4b6:e8ff:fe96:ed58/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Routing seems ok
# ip r s
default via 188.68.40.1 dev ens3 proto dhcp src 188.68.40.116 metric 20100
188.68.40.0/22 dev ens3 proto kernel scope link src 188.68.40.116 metric 100
However, checking with podman unshare --rootless-netns ip r s, the routing is nowhere to be seen:
$ podman unshare --rootless-netns ip r s
10.89.0.0/24 dev podman1 proto kernel scope link src 10.89.0.1
Since I had another machine with a static IP serving containers using podman, I reconfigured my Netcup host to use static IP assignment, after which the routing actually works correctly:
# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d6:b6:e8:96:ed:58 brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname enxd6b6e896ed58
inet 188.68.40.116/22 brd 188.68.43.255 scope global noprefixroute ens3
valid_lft forever preferred_lft forever
inet6 2a03:4000:17:fbf::116/128 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::d4b6:e8ff:fe96:ed58/64 scope link noprefixroute
valid_lft forever preferred_lft forever
# ip r s
default via 188.68.40.1 dev ens3 proto static metric 100
188.68.40.0/22 dev ens3 proto kernel scope link src 188.68.40.116 metric 100
$ podman unshare --rootless-netns ip r s
default via 188.68.40.1 dev ens3 proto static metric 100
10.89.0.0/24 dev podman1 proto kernel scope link src 10.89.0.1
188.68.40.0/22 dev ens3 proto kernel scope link metric 100
Steps to reproduce the issue
- Have a VPS configured using DHCP with NetworkManager
Describe the results you received
Containers cannot route out towards the internet
Describe the results you expected
Everything just works
podman info output
> podman info
host:
arch: amd64
buildahVersion: 1.39.2
cgroupControllers:
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.13-1.2.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.13, commit: unknown'
cpuUtilization:
idlePercent: 95.56
systemPercent: 3.11
userPercent: 1.33
cpus: 4
databaseBackend: sqlite
distribution:
distribution: opensuse-microos
version: "20250321"
eventLogger: journald
freeLocks: 2040
hostname: hvergelmir.kcore.org
idMappings:
gidmap:
- container_id: 0
host_id: 1001
size: 1
- container_id: 1
host_id: 165536
size: 65536
uidmap:
- container_id: 0
host_id: 1001
size: 1
- container_id: 1
host_id: 165536
size: 65536
kernel: 6.13.7-1-default
linkmode: dynamic
logDriver: journald
memFree: 7078678528
memTotal: 8237662208
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.14.0-1.2.x86_64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.14.0
package: netavark-1.14.1-1.1.x86_64
path: /usr/libexec/podman/netavark
version: netavark 1.14.1
ociRuntime:
name: crun
package: crun-1.20-1.2.x86_64
path: /usr/bin/crun
version: |-
crun version 1.20
commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
rundir: /run/user/1001/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-20250217.a1e48a0-2.1.x86_64
version: ""
remoteSocket:
exists: true
path: /run/user/1001/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: ""
package: ""
version: ""
swapFree: 0
swapTotal: 0
uptime: 0h 1m 32.00s
variant: ""
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- docker.io
store:
configFile: /home/podman/.config/containers/storage.conf
containerStore:
number: 2
paused: 0
running: 2
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /var/lib/containers/user/podman/storage
graphRootAllocated: 527206166528
graphRootUsed: 4633415680
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 17
runRoot: /run/user/1001/containers
transientStore: false
volumePath: /var/lib/containers/user/podman/storage/volumes
version:
APIVersion: 5.4.1
Built: 1742201234
BuiltTime: Mon Mar 17 09:47:14 2025
GitCommit: ""
GoVersion: go1.24.1
Os: linux
OsArch: linux/amd64
Version: 5.4.1
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
openSUSE MicroOS
Additional information
No response
I don't see how either DHCP or NetworkManager matter here. When pasta starts it should copy the ip addresses and routes from the default interface.
Can you actually reproduce this consistently? The only thing that might matter with quadlet is that units can start too early, before the network is ready. The main problem was sort of fixed in https://github.com/containers/podman/pull/24305, but that still depends on your network-online.target actually waiting long enough. And if it does not, newer pasta versions should fall back to a link-local address with a default route always set, so even then it should always have the route.
@Luap99 I can reproduce this.
Configure DHCP -> reboot -> broken routing.
Configure static -> reboot -> works.
I've used the podman-network-online-dummy.service from https://github.com/containers/podman/issues/24796 to work around the issues with user services and the network-online state.
Well, all I can say is that NetworkManager + DHCP should generally work fine with pasta; if not, we would have a ton of reports of it already, unless some recent update broke it? What are your versions? Can you reproduce this on another distro?
cc @sbrivio-rh Ideas why the default route would be missing?
There was another user (Kanibal) on IRC #podman who saw the same issues on Arch, IIRC. I'll try to spin up an F41 VM.
cc @sbrivio-rh Ideas why the default route would be missing?
No clear hypothesis yet... just one thing: 10.89.0.1 is the address configured in the container, but it doesn't match any address on the host, where the default gateway is 188.68.40.1.
@jdeluyck are you configuring an explicit address for the container? If yes, you also need to configure an explicit default gateway.
@sbrivio-rh no, ip assignment is done automatically.
No clear hypothesis yet... just one thing: 10.89.0.1 is the address configured in the container, but it doesn't match any address on the host, where the default gateway is 188.68.40.1.
That is just the podman bridge network, not the interface addresses that pasta copied.
@jdeluyck Maybe, to make the reproducer simpler, can you run a container with pasta directly and not use the custom networks, i.e. use podman run --network pasta ... and then check the IP addresses/routing there? That would allow us to add extra pasta debug options easily.
No clear hypothesis yet... just one thing: 10.89.0.1 is the address configured in the container, but it doesn't match any address on the host, where the default gateway is 188.68.40.1.
That is just the podman bridge network, not the interface addresses that pasta copied.
Ah oops. I got confused by the fact that there's no device route at all... but yes, that comes from NetworkManager's noprefixroute (which we copy) and that should work together with https://passt.top/passt/commit/?id=f301bb18b5b30ad93a706616bdc581c42ba4cbe2. But oops, it doesn't.
You could also try reproducing this with pasta directly, without even using Podman: pasta --config-net -- ip route show might already show the issue. In that case, it would be great to have --debug output and perhaps a strace too.
Reconfigured to dhcp
$ podman unshare --rootless-netns ip r s
10.89.0.0/24 dev podman1 proto kernel scope link src 10.89.0.1
$ pasta --config-net -- ip route show
default via 188.68.40.1 dev ens3 proto dhcp metric 100
188.68.40.0/22 dev ens3 proto kernel scope link metric 100
$ pasta --config-net --debug -- ip route show
0.0269: Template interface: ens3 (IPv4), ens3 (IPv6)
0.0269: Namespace interface: ens3
0.0269: MAC:
0.0269: host: 9a:55:9a:55:9a:55
0.0269: NAT to host 127.0.0.1: 188.68.40.1
0.0269: DHCP:
0.0270: assign: 188.68.40.116
0.0270: mask: 255.255.252.0
0.0270: router: 188.68.40.1
0.0270: DNS:
0.0270: 46.38.225.230
0.0270: 46.38.252.230
0.0270: DNS search list:
0.0270: kcore.org
0.0270: NAT to host ::1: fe80::1
0.0270: NDP/DHCPv6:
0.0270: assign: 2a03:4000:17:fbf::116
0.0270: router: fe80::1
0.0270: our link-local: fe80::1
0.0271: DNS search list:
0.0271: kcore.org
0.0362: SO_PEEK_OFF supported
0.0363: TCP_INFO tcpi_snd_wnd field supported
0.0363: TCP_INFO tcpi_bytes_acked field supported
0.0363: TCP_INFO tcpi_min_rtt field supported
default via 188.68.40.1 dev ens3 proto dhcp metric 100
188.68.40.0/22 dev ens3 proto kernel scope link metric 100
@sbrivio-rh looks fine to me?
I am the other person from IRC yesterday - I have the same issue. My setup is practically identical (netcup VPS, Traefik reverse proxy, systemd Quadlet Units + Socket Activation), but I am running Fedora 41 Server Edition.
The pasta command line:
$ pgrep -a pasta
1521 /usr/bin/pasta --config-net --pid /run/user/1000/containers/networks/rootless-netns/rootless-netns-conn.pid --dns-forward 169.254.1.1 -t none -u none -T none -U none --quiet --netns /run/user/1000/containers/networks/rootless-netns/rootless-netns --map-guest-addr 169.254.1.2
2359 /usr/bin/pasta --config-net --dns-forward 169.254.1.1 -t none -u none -T none -U none --quiet --netns /run/user/1000/netns/netns-df8b8777-6358-edc8-708d-237d152835f1 --map-guest-addr 169.254.1.2
My versions:
$ podman info
host:
arch: arm64
buildahVersion: 1.39.2
cgroupControllers:
- cpu
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.13-1.fc41.aarch64
path: /usr/bin/conmon
version: 'conmon version 2.1.13, commit: '
cpuUtilization:
idlePercent: 99.28
systemPercent: 0.28
userPercent: 0.45
cpus: 6
databaseBackend: sqlite
distribution:
distribution: fedora
variant: server
version: "41"
eventLogger: journald
freeLocks: 2021
hostname: lantern
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 524288
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 524288
size: 65536
kernel: 6.13.7-200.fc41.aarch64
linkmode: dynamic
logDriver: journald
memFree: 1094545408
memTotal: 8290492416
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.14.0-1.fc41.aarch64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.14.0
package: netavark-1.14.0-1.fc41.aarch64
path: /usr/libexec/podman/netavark
version: netavark 1.14.0
ociRuntime:
name: crun
package: crun-1.20-2.fc41.aarch64
path: /usr/bin/crun
version: |-
crun version 1.20
commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20250320.g32f6212-2.fc41.aarch64
version: ""
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: ""
package: ""
version: ""
swapFree: 8290037760
swapTotal: 8290037760
uptime: 14h 30m 8.00s (Approximately 0.58 days)
variant: v8
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
store:
configFile: /home/user/.config/containers/storage.conf
containerStore:
number: 15
paused: 0
running: 15
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/user/.local/share/containers/storage
graphRootAllocated: 548034052096
graphRootUsed: 30520229888
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 26
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/user/.local/share/containers/storage/volumes
version:
APIVersion: 5.4.1
BuildOrigin: Fedora Project
Built: 1741651200
BuiltTime: Tue Mar 11 01:00:00 2025
GitCommit: b79bc8afe796cba51dd906270a7e1056ccdfcf9e
GoVersion: go1.23.7
Os: linux
OsArch: linux/arm64
Version: 5.4.1
@Luap99
I've torn down all my containers, removed the quadlets and rebooted. This way I was certain there was nothing left. No pasta running after reboot.
Started a dummy container
$ podman run --network pasta nginx:latest
$ podman unshare --rootless-netns ip r s
default via 188.68.40.1 dev ens3 proto dhcp metric 100
188.68.40.0/22 dev ens3 proto kernel scope link metric 100
$ ps -ef | grep pasta
podman 2532 1 0 13:17 ? 00:00:00 /usr/bin/pasta --config-net --dns-forward 169.254.1.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1001/netns/netns-24e4ce61-a3da-fdd9-efd0-f665fdff3b8d --map-guest-addr 169.254.1.2
So this looks ok now?!
This is my proxy.network that I'm using with Traefik:
[Network]
NetworkName=proxy
IPv6=true
which is also about as basic as it'd get.
Container is also pretty basic (removed a bunch of volumes etc...)
[Container]
AutoUpdate=registry
ContainerName=traefik
Environment=TZ=Europe/Brussels
Network=proxy.network
Image=docker.io/traefik:v3.3
[Service]
Restart=on-failure
Sockets=http.socket https.socket
[Install]
WantedBy=multi-user.target default.target
[Unit]
Description=Traefik container
After=http.socket https.socket
Requires=http.socket https.socket
and the socket files are also plain
[Socket]
ListenStream=443
FileDescriptorName=https
Service=traefik.service
[Install]
WantedBy=sockets.target
[Socket]
ListenStream=80
FileDescriptorName=http
Service=traefik.service
[Install]
WantedBy=sockets.target
I've now manually started traefik, and the routing looks right at this moment.
Could this be a race condition?
On a whim I added
ExecStartPre=/bin/sleep 60
to my [Service] section, and restarted.
After a reboot, there's no pasta running yet, and I see the following routing:
$ podman unshare --rootless-netns ip r s
default via 188.68.40.1 dev ens3 proto dhcp metric 100
188.68.40.0/22 dev ens3 proto kernel scope link metric 100
Note how it's now correct, but missing the subnet route for the proxy network. When the container/pasta finally launches, that route gets added as well:
$ podman unshare --rootless-netns ip r s
default via 188.68.40.1 dev ens3 proto dhcp metric 100
10.89.0.0/24 dev podman1 proto kernel scope link src 10.89.0.1
188.68.40.0/22 dev ens3 proto kernel scope link metric 100
So this really feels like a race condition.
I guess it depends on your definition of "race condition": yes, if you bring up pasta before networking is ready, it will configure a link-local address and routes; see the "Local mode for disconnected setups" section in pasta(1). But this should happen reliably.
An open question is whether that "local mode" prevents anything from working in your setup. I don't have an answer to that yet, it would be great if you could investigate this, by e.g. adding pasta options for debug and logging (--debug, --log-file) and starting the container with podman-run early enough at boot, or in whatever problematic condition you found.
Note that we plan to address this eventually with a netlink monitor, checking for changes of addresses, prefixes, and routes on the host, and reflecting them into the container according to some policy. The goal is to make it less relevant whether pasta started before or after network configuration.
See also https://github.com/containers/podman/issues/22959#issuecomment-2228900989 and https://github.com/containers/podman/discussions/22737#discussioncomment-11410544. Patches indeed welcome if you are interested in contributing (bits of) this feature.
I am not seeing the effect of the "Local mode for disconnected setups", or at least nothing on that IP range. I always - reliably - get the correct IP range, yet no routing. The routing is needed (since I need to call out to Let's Encrypt). Incoming traffic works fine.
Is there a way to add pasta command line arguments to quadlets?
Is there a way to add pasta command line arguments to quadlets?
When using custom networks (podman network ls), they must be set in containers.conf (pasta_options). When using the normal pasta network mode, you can set the full --network argument the same way in quadlet: Network=pasta:--debug,--log-file,/tmp/somefile. The log file is needed because the output would not be shown by podman by default, I think.
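As an illustration, the quadlet syntax described above would sit in the container unit like this (the log path /tmp/somefile is just an example, as above):

```ini
[Container]
# Replace the plain pasta network mode with one that passes debug options.
# pasta writes its diagnostics to the given log file, since podman does not
# surface pasta's output by default.
Network=pasta:--debug,--log-file,/tmp/somefile
```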
I experience similar problems with a default Debian testing and podman 5.4.2 installation (hence no NetworkManager). After a reboot, routing no longer works. I have to stop the containers, do a podman network reload --all, and restart the containers.
Here's the netext.network:
[Unit]
Description=External Podman Network
[Network]
DisableDNS=false
Internal=false
And one of the containers, here stalwart.container:
[Container]
Image=docker.io/stalwartlabs/mail-server:latest
ContainerName=stalwart
AutoUpdate=local
PublishPort=10443:443
PublishPort=10025:25
PublishPort=10465:465
PublishPort=10587:587
PublishPort=10993:993
PublishPort=14190:4190
Network=netext.network
Network=netint.network
Timezone=Europe/Berlin
Volume=/srv/podvols/stalwart-mail:/opt/stalwart-mail
[Unit]
Requires=dbpg.service
After=dbpg.service
[Service]
TimeoutStartSec=600
Restart=always
[Install]
WantedBy=default.target
If I understand correctly, a workaround could be to configure the network manually, is that right? Might it be worth trying systemd-networkd instead of the Debian default /etc/network/interfaces setup?
@Luap99
Do you have a working network-online.target setup?
I'm not sure if I can answer the question. At least systemctl status network-online.target tells me that it's active. Anything else I can/should check regarding network-online.target?
In any case it would be useful if someone could capture the addresses and routes on the host when pasta is started, and/or provide the logfile as described in the other issue.
If I can manage that, I'd be happy to. But I'm afraid I need some help with that (I am quite new to the whole container topic). So should I replace the Network=netext.network and Network=netint.network lines with Network=pasta:…-lines? What would that be if these were my current networks?
netint.network:
[Network]
NetworkName=netint
DisableDNS=false
Internal=true
netext.network:
[Network]
NetworkName=netext
DisableDNS=false
Internal=false
Or can I leave the configuration as it is and switch on the debug output elsewhere?
I have just realized that it behaves as follows:
- Rebooting
- No routing: podman run --rm -it --network netext alpine ping podman.io shows 100% loss
- systemctl --user stop stalwart.service dbpg.service [...]
- podman run --rm -it --network netext alpine ping podman.io works
- systemctl --user start stalwart.service dbpg.service [...] -> routing ok
So, contrary to what I stated three days ago, podman network reload --all is not necessary. I wonder why simply stopping the containers is enough to bring routing back. 🤔
I wonder why simply stopping the containers is enough to bring routing back. 🤔
Because if all containers are stopped pasta is stopped too. And then it is started again when a container uses it.
For custom networks you need to set the pasta options in containers.conf
[network]
pasta_options = ["--debug", "--log-file", "/tmp/pasta.log"]
Then restart and show what it says when the routing is broken.
This is what I got directly after the system rebooted. Slightly obfuscated. Please let me know if there's more I can do.
EDIT: btw, IPv6 has been configured statically. IPv4 uses DHCP.
0.0020: info: No interfaces with usable IPv4 routes
0.0020: Failed to detect external interface for IPv4
Well, this explains why it is not working. By the time pasta starts, no IPv4 address is configured, but IPv6 is, so pasta switched into IPv6-only mode?! @sbrivio-rh Should pasta use the "local mode" for the other address type in such a case, to avoid these races?
I am not sure how network-online.target is configured, but it doesn't seem to wait until the IPv4 addresses are configured, so pasta doesn't see them.
Unfortunately the whole wait-for-network-online.target workaround I added for https://github.com/containers/podman/issues/22197 doesn't seem to work correctly (https://github.com/containers/podman/issues/24796), and for systems where network-online does not mean fully online it doesn't help anyway.
Not sure what the best option is; as a workaround you should be able to modify podman-user-wait-network-online.service to wait for an IPv4 address or something.
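A minimal sketch of that workaround, assuming iproute2 is available and that "has an IPv4 default route" is the right readiness test for this host, could be a drop-in at ~/.config/systemd/user/podman-user-wait-network-online.service.d/override.conf:

```ini
[Service]
# Clear the packaged ExecStart, then wait for network-online.target
# plus an IPv4 default route before reporting the network as online.
ExecStart=
ExecStart=/bin/sh -c 'until systemctl is-active network-online.target && ip -4 route show default | grep -q .; do sleep 0.5; done'
```

This is only a sketch, not the packaged unit's exact contents; adjust the condition to whatever "online" means for your setup.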
By the time pasta starts, no IPv4 address is configured, but IPv6 is, so pasta switched into IPv6-only mode?!
That could be the case, yes. At the moment we enable local mode only if we can't find any IP version with a routable interface.
@sbrivio-rh Should pasta use the "local mode" for the other address type in such case to avoid these races?
I guess that would fix the issue. Other than being a bit weird, because it would be enabled the whole time on IPv4-only or IPv6-only setups, I don't see any obvious downside. I wonder if @dgibson can think of any issue with this... if not, I would proceed with it.
Not sure what the best option is; as a workaround you should be able to modify podman-user-wait-network-online.service to wait for an IPv4 address or something.
By the way, eventually, we plan to address this with a netlink monitor approach, but that will take a while unless somebody else feels like stepping up and sending patches for it.
@Luap99 Thanks a lot for your excellent support. I managed to write a quick and (very) dirty script which checks for IPv4, so that the line in .config/systemd/user/podman-user-wait-network-online.service.d/override.conf now reads ExecStart=/bin/sh -c 'until systemctl is-active network-online.target && /usr/local/bin/checkipv4.sh; do sleep 0.5; done'.
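For reference, the script itself was not posted, so here is a hypothetical reconstruction of the gist of such an IPv4 check. It only needs to test for an IPv4 default route; the check is factored into a function so the logic can be exercised without touching the live routing table:

```shell
#!/bin/sh
# has_ipv4_default: succeed if the given `ip -4 route show default` output
# contains a default route (i.e. the host has outbound IPv4 connectivity).
has_ipv4_default() {
    printf '%s\n' "$1" | grep -q '^default '
}

# A real checkipv4.sh would then simply do:
#   has_ipv4_default "$(ip -4 route show default)"
```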
Landed here from the duplicate issue https://github.com/containers/podman/issues/25859. While waiting for a fix, I'd like to share another possible workaround to wait until the internet connection becomes available. In this case, by delaying network-online.target:
[Unit]
DefaultDependencies=no
After=nss-lookup.target
Before=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=sh -c 'until ping -c 1 google.com; do sleep 3; done'
TimeoutStartSec=120s
[Install]
WantedBy=network-online.target
Original idea found here.
Does anyone know if the problem has been fixed with Podman 5.5.0? I didn't notice anything in the release notes, but maybe I missed something.
Does anyone know if the problem has been fixed with Podman 5.5.0?
No, it wasn't. We still need to implement a route monitoring mechanism in pasta, and then probably some matching changes in Podman.