Fedora 38 Cloud + Ubuntu 22.04 container in bridge mode receives partial answers only
Is this a docs issue?
- [X] My issue is about the documentation content or website
Type of issue
I can't find what I'm looking for
Description
Fedora 38 Cloud:
$ sudo dnf install -y docker-ce
$ systemctl restart docker
$ docker run -ti --rm ubuntu:22.04 bash
$ apt update
# timeouts
Adding `--network=host` helps, but I need it to work without that flag for `docker buildx build`.
$ dnf install -y docker-ce
$ systemctl restart docker
$ docker run -ti --rm --network=host ubuntu:22.04 bash
$ apt update
# instant responses
According to Wireshark, DNS resolution via 8.8.8.8 works fine (No. 1-39). HTTP/1.1 GET requests are sent via eth0 to the Ubuntu servers and responses are received, but they are not forwarded to the Docker bridge network (No. 32+).
docker: 172.17.0.2, DNS: 10.8.1.53, Fedora: 10.0.2.2, Google: 8.8.8.8, Ubuntu Server: 91.189.91.83
Adding `IPForward=kernel` or `IPForward=true` to '/usr/lib/systemd/network/80-container-host0.network' does not help.
- https://docs.docker.com/engine/install/troubleshoot/#ip-forwarding-problems
- https://docs.fedoraproject.org/en-US/fedora-server/administration/virtual-routing-bridge/
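For what it's worth, a path-MTU probe from inside the container would have surfaced the cause directly. This is a sketch, not from the original report; the 1400/1500 values come from the `ip a` output below, and the probe sizes are derived from them:

```shell
# Path-MTU probe sizes: ICMP payload = MTU - 20 (IPv4 header) - 8 (ICMP header).
fits=$((1400 - 20 - 8))      # largest payload that fits the 1400-byte host MTU
too_big=$((1500 - 20 - 8))   # payload sized for the bridge's 1500-byte MTU
echo "$fits $too_big"        # -> 1372 1472
# From inside the container, with the Don't-Fragment bit set (-M do):
#   ping -c 2 -M do -s 1372 8.8.8.8   # should get replies
#   ping -c 2 -M do -s 1472 8.8.8.8   # should fail or hang if the path MTU is 1400
```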
$ uname -sro
Linux 6.5.8-200.fc38.x86_64 GNU/Linux
$ docker --version
Docker version 24.0.6, build ed223bc
$ firewall-cmd --version
1.3.4
$ cat /proc/sys/net/ipv4/ip_forward
1
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc mq state UP group default qlen 1000
link/ether 02:0a:c0:00:00:2c brd ff:ff:ff:ff:ff:ff
altname enp1s0
inet 10.0.2.2/24 brd 10.0.2.255 scope global dynamic noprefixroute eth0
valid_lft 86307938sec preferred_lft 86307938sec
inet6 fe80::a:c0ff:fe00:2c/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c9:ce:d4:a2 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:c9ff:fece:d4a2/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
7: veth0f05ae0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ea:eb:a0:0b:55:62 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e8eb:a0ff:fe0b:5562/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
$ ip route
default via 10.0.2.1 dev eth0 proto dhcp src 10.0.2.2 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.2 metric 100
127.0.0.0/8 dev lo proto kernel scope link src 127.0.0.1 metric 30
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
$ firewall-cmd --get-active-zones
docker
interfaces: docker0
public
interfaces: lo eth0
$ firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 lo
sources:
services: dhcpv6-client mdns ssh
ports: 1-65535/tcp 1-65535/udp
protocols:
forward: yes
masquerade: no
forward-ports:
source-ports: 1-65535/tcp 1-65535/udp
icmp-blocks:
rich rules:
$ firewall-cmd --list-all --zone docker
docker (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: docker0
sources:
services:
ports: 1-65535/tcp 1-65535/udp
protocols:
forward: yes
masquerade: no
forward-ports:
source-ports: 1-65535/tcp 1-65535/udp
icmp-blocks:
rich rules:
$ traceroute 172.17.0.2
traceroute to 172.17.0.2 (172.17.0.2), 30 hops max, 60 byte packets
1 172.17.0.2 (172.17.0.2) 0.105 ms 0.012 ms 0.010 ms
$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.086 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.105 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.076 ms
$ #docker ps
$ docker inspect $CONTAINERID -f "{{json .NetworkSettings.Networks }}"
{
"bridge": {
"IPAMConfig":null,
"Links":null,
"Aliases":null,
"NetworkID":"f3fd71c091459dead236d7edacf2fa24a24cfd0a2284a18ef66037c35c2d1d15",
"EndpointID":"e2cb07f288a96a2695fff002d03863226c0367b5c92050b1331919ec07186d09",
"Gateway":"172.17.0.1",
"IPAddress":"172.17.0.2",
"IPPrefixLen":16,
"IPv6Gateway":"",
"GlobalIPv6Address":"",
"GlobalIPv6PrefixLen":0,
"MacAddress":"02:42:ac:11:00:02",
"DriverOpts":null
}
}
$ docker network ls --no-trunc
f3fd71c091459dead236d7edacf2fa24a24cfd0a2284a18ef66037c35c2d1d15 bridge bridge local
[...]
In Ubuntu:
$ cat /proc/net/fib_trie
[...]
172.17.0.2
Location
https://docs.docker.com/engine/install/troubleshoot/
Suggestion (EDIT)
Docker should check/set the MTU automatically, or the troubleshooting page should document this behaviour.
This looks like a well-known problem that is at least five years old. The solution is to inspect the interfaces with `ip a` or `ifconfig` and correct the MTU.
The problem is that the Maximum Transmission Unit (MTU) of the Docker bridge is, in my case, 1500, which is higher than the MTU of the Fedora host adapter (1400). So if the Ubuntu container sends a packet larger than 1400 bytes, the host adapter can't forward it because it is too long.
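The arithmetic behind the failure can be sketched as follows (the 1500 and 1400 values are the ones from this report; header sizes assume plain IPv4/TCP with no options):

```shell
# Numbers from this report: docker0 has MTU 1500, the host's eth0 only 1400.
bridge_mtu=1500
host_mtu=1400
# With 20 bytes of IPv4 header and 20 bytes of TCP header, the usable payload is:
bridge_mss=$((bridge_mtu - 20 - 20))   # 1460
host_mss=$((host_mtu - 20 - 20))       # 1360
echo "bridge MSS: $bridge_mss, host MSS: $host_mss"
# The container advertises an MSS based on the 1500-byte bridge, so peers send
# full-sized segments that exceed what eth0 can carry:
if [ "$bridge_mss" -gt "$host_mss" ]; then
  echo "segments up to $((bridge_mss - host_mss)) bytes too large for eth0"
fi
```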
To solve this issue, run `dockerd --mtu 1400` or add the setting to your config (e.g. '/etc/docker/daemon.json'):
{
"mtu": 1400
}
Beware that even after setting the MTU, `ip a` and `ifconfig` will still show the original value!
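Because of that, it's safer to check the MTU from inside a fresh container than to trust the host's `ip a`. A sketch, writing the setting to a scratch copy of daemon.json first so it can be validated (on a real host the file is '/etc/docker/daemon.json' and needs root):

```shell
# Write the MTU setting to a scratch copy of daemon.json and validate the JSON.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "mtu": 1400
}
EOF
python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["mtu"])' "$tmp/daemon.json"
# After copying it into place and running `systemctl restart docker`, verify
# from inside a *fresh* container rather than from the host:
#   docker run --rm ubuntu:22.04 cat /sys/class/net/eth0/mtu   # expect 1400
```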
- https://stackoverflow.com/questions/60038755/docker-bridge-network-tcp-restransmission
- https://sylwit.medium.com/how-we-spent-a-full-day-figuring-out-a-mtu-issue-with-docker-4d81fdfe2caf
- https://mlohr.com/docker-mtu/
- https://www.howtouselinux.com/post/check-mtu-size-in-linux
There hasn't been any activity on this issue for a long time.
If the problem is still relevant, mark the issue as fresh with a /remove-lifecycle stale
comment.
If not, this issue will be closed in 14 days. This helps our maintainers focus on the active issues.
Prevent issues from auto-closing with a /lifecycle frozen
comment.
/lifecycle stale
Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
/lifecycle locked