
dns request failed: request timed out

Open · MartinX3 opened this issue 2 years ago · 27 comments

I'm using Arch Linux, so the packages should be at the newest versions.

I'm using firewalld and rootless Podman with netavark and aardvark-dns.

I understand that rootless Podman with netavark won't manage my firewalld, but I would like to know which rules I need to activate to stop the spam in my journal, and whether the rule needs to be on my loopback or network interface. (Also, whether it is enough to allow communication with the host instead of having a port open to the internet.)

My dns resolver is systemd-resolved

$ ls -lha /etc/resolv.conf
lrwxrwxrwx 1 root root 39 31. Okt 10:22 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf

My journal spam: aardvark-dns[6156]: 21433 dns request failed: request timed out

The rootless containers themselves can ping google.com. I didn't test whether they can ping a container DNS name.

MartinX3 avatar Oct 31 '22 09:10 MartinX3

Ah, after turning /etc/resolv.conf back into a symlink to systemd-resolved (sudo ln -rsf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf), the messages disappear. I assume it now queries the localhost IP of systemd-resolved instead of the network IP of the outside resolver.
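To verify the symlink state afterwards, a quick check like the following should work (these are the standard systemd-resolved paths; adjust if your distro differs):

```shell
# Where does /etc/resolv.conf actually point? Fall back to "none" if the
# file is missing so the check never errors out.
link=$(readlink -f /etc/resolv.conf 2>/dev/null || echo none)
case "$link" in
  /run/systemd/resolve/*) echo "systemd-resolved manages resolv.conf ($link)" ;;
  *)                      echo "resolv.conf points elsewhere: $link" ;;
esac
```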

I don't know how to make this plugin write verbose debug logs to the journal, but I'm glad that it is fixed now.

MartinX3 avatar Oct 31 '22 10:10 MartinX3

And it's back: dns request failed: request timed out

It seems to happen when I execute nslookup in a container. But I still get a result. Weird.
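To get a rough failure rate, a loop like this can be run inside the container. localhost is only a stand-in target that always resolves; replace it with another container's name or an external host to actually exercise aardvark-dns:

```shell
# Count how many of 20 consecutive name lookups fail. getent resolves
# through NSS, so real hostnames go through the container's configured DNS.
fails=0
for i in $(seq 1 20); do
  getent hosts localhost >/dev/null 2>&1 || fails=$((fails + 1))
done
echo "$fails/20 lookups failed"
```

If the image doesn't ship getent (e.g. busybox-based), nslookup can serve the same purpose.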

MartinX3 avatar Nov 01 '22 09:11 MartinX3

I hit the same problem on my Raspberry Pi with Fedora and rootless Podman containers.

Szwendacz99 avatar Dec 27 '22 10:12 Szwendacz99

Having the exact same issue. DNS lookup is extremely unstable; only about two thirds of lookups succeed.

This is on Fedora Silverblue 37 using rootless containers and the new networking stack.

jmaris avatar Dec 28 '22 12:12 jmaris

Now I'm running a Nextcloud container which communicates with a PostgreSQL and an LDAP container. I access the Nextcloud container from the internet through an nginx reverse-proxy container.

A third of the Nextcloud web interface requests result in a 502 Bad Gateway because of the unstable DNS, and my logs are getting spammed with

aardvark-dns[3617]: 50310 dns request failed: request timed out

I hope the next podman release will include the new network stack which hopefully fixes this issue.

MartinX3 avatar Dec 29 '22 18:12 MartinX3

It is impossible to help with these issues when reporters do not provide versions for podman, netavark, and aardvark-dns. Please provide as much relevant information as possible.

baude avatar Dec 30 '22 13:12 baude

podman: 4.3.1 netavark: 1.4.0 aardvark-dns: 1.4.0

MartinX3 avatar Dec 30 '22 13:12 MartinX3

Would you say your machine/VM is high-performance, or might it have slow I/O, processor, or RAM limitations? I'm trying to understand whether you might be hitting a race.

baude avatar Dec 30 '22 13:12 baude

The first bare-metal server with this issue has an HDD connected via SATA. CPU: Intel Xeon E3-1231 v3, 3.4 GHz, 4 cores. RAM: 32 GB DDR3.

The second bare-metal server has an 870 Evo SSD connected via SATA. CPU: AMD Phenom II X4 955, 3.2 GHz, 4 cores. RAM: 6 GB DDR2.

The second server logs this error many times; the first logs it less often, but still regularly.

MartinX3 avatar Dec 30 '22 14:12 MartinX3

@flouthoc wdyt?

baude avatar Dec 30 '22 14:12 baude

Now my faster server spammed it during the night, while the server's pods weren't being used by clients.

MartinX3 avatar Dec 31 '22 12:12 MartinX3

We used to see similar issues with older versions of netavark and aardvark in Podman CI as well, but they were fixed in newer versions by https://github.com/containers/aardvark-dns/pull/220. I guess there might be some issue that our CI does not reproduce; I'll try to reproduce this locally.

flouthoc avatar Jan 01 '23 07:01 flouthoc

I'm seeing the same problems on a freshly installed Fedora Server 37 instance as well. The machine is basically idling with no load, and my journal is still filling up with these errors. This issue was not present before 2022, and I've run similar setups on slower machines without any DNS lookup issues.

podman: 4.3.1 netavark: 1.4.0 aardvark-dns: 1.4.0

vulpes2 avatar Jan 31 '23 23:01 vulpes2

Can you test with v1.5?

Luap99 avatar Mar 06 '23 14:03 Luap99

I think the timeout is fixed. But now I sometimes get an "empty dns response" on all machines.
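A rough way to tell the two failure modes apart from inside a container: a lookup that fails only after several seconds looks like a timeout, while a near-instant failure looks like an empty reply. This is just a sketch; localhost stands in for a real peer container name, and the 2-second threshold is arbitrary:

```shell
# Time a single lookup and classify the result by how long it took.
start=$(date +%s%N)
if getent hosts localhost >/dev/null 2>&1; then
  verdict=ok
else
  elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
  if [ "$elapsed_ms" -ge 2000 ]; then verdict=timeout; else verdict=empty; fi
fi
echo "lookup verdict: $verdict"
```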

MartinX3 avatar Mar 06 '23 15:03 MartinX3

Does this cause problems for the containers, or is it just an error that gets logged often?

Luap99 avatar Mar 06 '23 15:03 Luap99

It's logged often, with long breaks in between, so it appears in bursts most of the time.

I think the services just retry the DNS query until it works. So I would say it just wastes CPU time and maybe some network bandwidth?

Maybe I haven't used it long enough to see long-term errors.

MartinX3 avatar Mar 06 '23 16:03 MartinX3

Do you have a simple reproducer? What kind of application are you running, and how many DNS requests does it make?

Luap99 avatar Mar 06 '23 16:03 Luap99

Tested with v1.5 and I'm getting a lot of dns request got empty response as well. Here's a list of all the containers I'm running on my system:

  • eclipse-mosquitto:2
  • koenkk/zigbee2mqtt
  • homeassistant/home-assistant:stable

It's notable that none of these containers are particularly demanding on the hardware, and my system load average is generally below 0.1 at all times.

vulpes2 avatar Mar 10 '23 17:03 vulpes2

It happens without workload.

The server just runs

  • mailrise
  • swag & duckdns
  • borg-backup-server

MartinX3 avatar Mar 11 '23 09:03 MartinX3

I'm experiencing the same issue with aardvark-dns reporting lots of dns request got empty response errors in my logs. It seems to be causing problems for at least the containers running Uptime Kuma, Invidious and Jellyseerr. Uptime Kuma starts throwing ECONNRESET when doing GET requests, and Invidious and Jellyseerr similarly start to have their requests fail, with external content taking a long time to load, if at all.

It happens with both 1.5.0 and the latest 1.6.0 from podman-next. For me it seems to start after around 3 days of uptime. I've tried changing machines and switching from an onboard Realtek NIC to an Intel i350-T2 controller, but to no avail. Rebooting solves the issue until uptime reaches 3 days again.
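To check whether the errors really correlate with uptime, the journal messages can be tallied per day. The helper below only does the text processing; on a real host you would feed it journalctl -o short-iso --no-pager output. The two sample lines here are fabricated for the demo:

```shell
# Group aardvark-dns DNS errors by date (the first 10 characters of an
# ISO timestamp) and count occurrences per day.
tally_dns_errors() {
  grep -E 'aardvark-dns.*dns request' | cut -c1-10 | sort | uniq -c
}

# Demo run on two fabricated journal lines from the same day:
result=$(printf '%s\n' \
  '2023-04-01T10:00:01+0000 host aardvark-dns[3617]: 50310 dns request failed: request timed out' \
  '2023-04-01T11:00:01+0000 host aardvark-dns[3617]: 50311 dns request got empty response' \
  | tally_dns_errors)
echo "$result"
```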

soiboi avatar Apr 04 '23 12:04 soiboi

Just ran into this issue as well with Nextcloud + Nginx Proxy Manager. What's funny is that I am using the same docker-compose setup on two different servers, and one works fine while the other doesn't. The only difference is that the one that is breaking isn't publicly accessible on the internet; instead it is set up to respond on a .lan domain configured on the home router. NPM has a proxy host set up that responds to mydomain.lan and redirects it to the Nextcloud container.

It will work for a bit when I up/down NPM, but then eventually fails after a few hours or even days with 502 Bad Gateway errors, and dns request got empty response starts getting spammed into journalctl.

My setup

  • Arch Linux linux-lts 6.6.18-1
  • podman: 4.9.3-1
  • podman-docker: 4.9.3-1
  • docker-compose: 2.24.6-1
  • netavark: 1.10.3-1
  • aardvark-dns: 1.10.0-1

Here are my docker-compose files to set up each of them (rootful, btw):

Nextcloud docker-compose.yml

   version: '3'
   
   services:
     db:
       image: mariadb
       command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
       restart: always
       volumes:
         - ./db:/var/lib/mysql
       environment:
         - MYSQL_ROOT_PASSWORD=<pw here>
         - MARIADB_AUTO_UPGRADE=1
         - MARIADB_DISABLE_UPGRADE_BACKUP=1
       env_file:
         - db.env
       networks:
         - backend
   
     redis:
       image: redis:alpine
       restart: always
       networks:
         - backend
   
     nextcloud:
       image: nextcloud:apache
       restart: always
       volumes:
         - ./html:/var/www/html
       environment:
         - MYSQL_HOST=db
         - REDIS_HOST=redis
       env_file:
         - db.env
       depends_on:
         - db
         - redis
       networks:
         - nextcloud_frontend
         - backend
   
     cron:
       image: nextcloud:apache
       restart: always
       volumes:
         - ./html:/var/www/html
       entrypoint: /cron.sh
       depends_on:
         - db
         - redis
       networks:
         - backend
   
   networks:
     nextcloud_frontend:
       external: true
     backend:

db.env

   MYSQL_PASSWORD=<pw here>
   MYSQL_DATABASE=nextcloud
   MYSQL_USER=nextcloud

Nginx Proxy Manager docker-compose.yml

   version: '3.8'
   services:
     proxy:
       image: 'jc21/nginx-proxy-manager:latest'
       restart: always
       ports:
         # These ports are in format <host-port>:<container-port>
         - '80:80' # Public HTTP Port
         - '443:443' # Public HTTPS Port
         - '81:81' # Admin Web Port
         # Add any other Stream port you want to expose
         # - '21:21' # FTP
   
       # Uncomment the next line if you uncomment anything in the section
       # environment:
         # Uncomment this if you want to change the location of
         # the SQLite DB file within the container
         # DB_SQLITE_FILE: "/data/database.sqlite"
   
         # Uncomment this if IPv6 is not enabled on your host
         # DISABLE_IPV6: 'true'
   
       healthcheck:
         test: ["CMD", "/bin/check-health"]
         interval: 30s
         timeout: 3s
   
       volumes:
         - ./data:/data
         - ./letsencrypt:/etc/letsencrypt
   
       networks:
         - nextcloud_frontend
   
   networks:
     nextcloud_frontend:
       external: true

urbenlegend avatar Feb 26 '24 17:02 urbenlegend