Hostname not resolvable inside related containers
Issue Description
When you use --hostname on podman run, the supplied hostname is expected to be resolvable both inside the container itself and from other containers that connect to it. In practice, the hostname is resolvable only inside the container itself.
Based on a discussion in the podman-compose repo: https://github.com/containers/podman-compose/discussions/730
Steps to reproduce the issue
- Execute podman-compose up -d in a project where the defined hostname differs from the service name (see the sketch below).
- Enter one of the containers and try to ping the other by hostname, not by service name.
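For illustration, the compose file in such a project might look like this (a hypothetical sketch; the service names, image, and command are placeholders, only the hostnames match the output below):

services:
  web1:
    image: busybox
    hostname: superweb1   # hostname differs from the service name "web1"
    command: sleep 3600
  web2:
    image: busybox
    hostname: superweb2   # hostname differs from the service name "web2"
    command: sleep 3600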
Describe the results you received
The related container's hostname is not resolvable:
/var/www/html # hostname
superweb2
/var/www/html # ping superweb1
ping: bad address 'superweb1'
Describe the results you expected
Expected to be able to ping the related container by its hostname:
/var/www/html # hostname
superweb2
/var/www/html # ping superweb1
PING superweb1 (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.474 ms
podman info output
host:
  arch: arm64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 99.92
    systemPercent: 0.04
    userPercent: 0.03
  cpus: 1
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.3.12-200.fc38.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 1591521280
  memTotal: 2048544768
  networkBackend: netavark
  networkBackendInfo:
    backend: ""
    dns: {}
  ociRuntime:
    name: crun
    package: crun-1.8.5-1.fc38.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.aarch64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 63h 26m 37.00s (Approximately 2.62 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 3754766336
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.1
  Built: 1685123899
  BuiltTime: Fri May 26 18:58:19 2023
  GitCommit: ""
  GoVersion: go1.20.4
  Os: linux
  OsArch: linux/arm64
  Version: 4.5.1
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
Additional information
podman-compose is not part of Podman.
Hostname DNS resolution only happens when a separate network is set up for the containers; this should happen by default when using docker-compose/podman-compose.
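For example, a minimal sketch of that distinction with plain podman (untested; the network and container names are made up):

podman network create testnet
podman run -d --name a --network testnet busybox sleep 3600
# the container *name* is resolvable from other containers on the user-defined network:
podman run --rm --network testnet busybox nslookup a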
@rhatdan Okay, I just mentioned podman-compose here because it's a "wrapper" running podman internally.
About the hostname, I ran the following test:
podman network create my_networt_test
podman run \
--name=my_container_test_web1 \
-d \
-v /Users/dev/source/opensource/other/containers__podman-compose/tests/nets_test1/test1.txt:/var/www/html/index.txt:ro \
--net my_networt_test \
--network-alias web1,web1.alias \
-p 38001:8001 \
-w /var/www/html \
--hostname web1.hostname busybox /bin/busybox httpd -f -h /var/www/html -p 8001
podman run \
--name=my_container_test_web2 \
-d \
-v /Users/dev/source/opensource/other/containers__podman-compose/tests/nets_test1/test2.txt:/var/www/html/index.txt:ro \
--net my_networt_test \
--network-alias web2,web2.alias \
-p 38002:8001 \
-w /var/www/html \
--hostname web2.hostname busybox /bin/busybox httpd -f -h /var/www/html -p 8001
Inside the container my_container_test_web2, I executed these commands:
/var/www/html # hostname
web2.hostname
/var/www/html # nslookup web2
Server: 10.89.4.1
Address: 10.89.4.1:53
Non-authoritative answer:
Name: web2.dns.podman
Address: 10.89.4.6
Non-authoritative answer:
/var/www/html # nslookup web2.alias
Server: 10.89.4.1
Address: 10.89.4.1:53
Non-authoritative answer:
Name: web2.alias
Address: 10.89.4.6
Non-authoritative answer:
/var/www/html # nslookup web2.hostname
Server: 10.89.4.1
Address: 10.89.4.1:53
Non-authoritative answer:
** server can't find web2.hostname: NXDOMAIN
/var/www/html # nslookup web1
Server: 10.89.4.1
Address: 10.89.4.1:53
Non-authoritative answer:
Name: web1.dns.podman
Address: 10.89.4.5
Non-authoritative answer:
/var/www/html # nslookup web1.alias
Server: 10.89.4.1
Address: 10.89.4.1:53
Non-authoritative answer:
Name: web1.alias
Address: 10.89.4.5
Non-authoritative answer:
/var/www/html # nslookup web1.hostname
Server: 10.89.4.1
Address: 10.89.4.1:53
Non-authoritative answer:
** server can't find web1.hostname: NXDOMAIN
From inside my_container_test_web2, the container names (web2, web1) and their aliases were resolved, but the hostnames (web2.hostname, web1.hostname) both returned NXDOMAIN.
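One way to see where each name comes from (a sketch, assuming podman's default /etc/hosts handling): the --hostname value is written into the container's own /etc/hosts, which would explain why a container can ping its own hostname even though the network's DNS server answers NXDOMAIN for it:

podman exec my_container_test_web2 cat /etc/hosts
# expect a line mapping this container's IP to web2.hostname; that entry
# exists only inside this container, so other containers cannot resolve it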
My question is: is this expected behavior, or is it a bug?
I'm also running into this issue while evaluating podman for my team's eventual replacement of docker. I may have found a possible contributing factor.
Since I installed podman with apt, my version is a little behind. In particular, the CNI plugins that come with the Ubuntu (WSL2) package are way behind. I kept getting warnings from several plugins that they don't "support config version 1.0.0," so I followed these instructions to get newer versions, which fixed that error. If you look carefully at that list of plugins, you may notice that the dnsname plugin is missing.
I moved dnsname from the original plugins back into its proper location and finally got past an error caused by not being able to resolve another service's network name, though with a difference from docker compose: under docker I could use <service_name>.<network_name>, but under podman I must use just <service_name> (a mystery, since I deliberately haven't changed my compose file from its known-good state with docker). There also seems to be a delay before external DNS names can be resolved after rebuilding the container; my service would crash on the first connection outside the local network, but if I let it sit for a while and try again, the same name resolves without issue.
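To make that difference concrete (a hypothetical sketch; "app" and "db" are placeholder service names, "backend" a placeholder network name, and it assumes an image that ships nslookup, as busybox does):

podman exec app nslookup db.backend   # the <service_name>.<network_name> form that worked under docker
podman exec app nslookup db           # the bare <service_name> form that works here under podman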
In addition to dnsname, the flannel plugin is also not included in the CNI distribution. I don't know what flannel does, but thought I'd mention it just in case.
Edit: I've noticed that ~/.config/cni/net.d/app.conflist has "domainName": "dns.podman" in the dnsname options, and I can resolve <service_name>.dns.podman inside a container. This might not be a "bug," but it is a significant difference from docker compose. I have a network defined similar to this:
networks:
  myapp:
    driver: "bridge"
    name: "myapp" # because having "my_app-myapp" as the tld is redundant and annoying
Under docker, the TLD is the network's name, if set, else it's <project_name>-<network_id>. Maybe it's more precise to say that the TLD is whatever is listed in the docker inspect output under Name. This behavior is what I was expecting, but I grant that it's not declared in the spec, so it might not be a bug in the truest sense (but it is "wrong," imo).
I haven't tried yet, but I assume I can work around this by setting domainName in the conflist file. That's... fine, I guess, but it's absolutely going to create friction in moving my team from docker. My ideal would be for podman-compose to match the behavior of docker compose, whether the spec says it must or not.
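For what it's worth, that workaround would presumably amount to editing the existing dnsname entry in ~/.config/cni/net.d/app.conflist (an untested sketch; "myapp" is the network name from the snippet above):

{
  "type": "dnsname",
  "domainName": "myapp",
  "capabilities": {
    "aliases": true
  }
}

After recreating the network, <service_name>.myapp should then resolve the way <service_name>.dns.podman does now.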