DNS spam
Description
There seems to be an issue that makes Jellyseerr spam my DNS server with AAAA requests for my internal Jellyfin URL. I'm not sure why there is even a request for an AAAA record, since I don't use IPv6.
I tested this on 1.7.0 because that was the version I had installed. I updated to 1.8.1 in case it had been fixed in the meantime, but it wasn't.
I'm not sure what triggers this; maybe it's a sync job. It has been going on for a while, but I only just noticed because my primary DNS server would rate-limit this machine's IP just long enough for my secondary DNS server to take over and serve the requests, and then the secondary would rate-limit the IP just in time for the primary to take over again.
Version
1.7.0, 1.8.1
Steps to Reproduce
I used Pi-hole to monitor traffic volume.
On the host where Jellyseerr is installed, I ran `sudo tcpdump -i ens18 'dst port 53'` to watch for outgoing DNS requests.
I am running Jellyseerr in a Docker container, so I added `extra_hosts: - "internal.mrga.dev:IP"` to my compose file.
This adds the local IP to the container's hosts file, but I don't like it as a permanent solution.
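For reference, roughly what that looks like in the compose file (the service name, image tag, and IP address below are placeholders, not my actual setup):

```yaml
# Sketch only: service name, image tag, and address are illustrative placeholders.
services:
  jellyseerr:
    image: fallenbagel/jellyseerr:latest
    extra_hosts:
      - "internal.mrga.dev:192.168.1.10"
```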
Screenshots
Logs
Will add if needed.
Platform
desktop
Device
N/A
Operating System
N/A
Browser
N/A
Additional Context
No response
Code of Conduct
- [X] I agree to follow Jellyseerr's Code of Conduct
Related to #387?
Also try this option: https://github.com/Fallenbagel/jellyseerr/issues/722#issuecomment-2067363186
Could very well be #387. Can the resolved internal IP be cached? It would almost never change, so it would make sense to reuse the value instead of reaching out to DNS every time.
Why are there requests for both A and AAAA records when I have not set up IPv6? Is that a Node.js quirk?
It might be an alpine-node quirk. Try the option I sent in the other comment.
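For what it's worth, a general Node-level workaround is to pin the HTTP agent to IPv4 with `family: 4`, which should stop the resolver from issuing AAAA queries at all. A rough sketch only (the base URL is a placeholder, and this is not necessarily what that option does):

```ts
// Sketch: pin an axios instance to IPv4 so the resolver only asks for A records.
// The base URL is a placeholder, not Jellyseerr's real configuration.
import axios from 'axios';
import https from 'https';

const ipv4OnlyAgent = new https.Agent({ family: 4 });

const jellyfinApi = axios.create({
  baseURL: 'https://internal.mrga.dev',
  httpsAgent: ipv4OnlyAgent,
});
```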
I tried https://github.com/Fallenbagel/jellyseerr/issues/722#issuecomment-2067363186 without the extra_hosts entry in docker compose and ran a Recently Added scan. It did not help; a lot of DNS traffic was still generated. This problem probably only gets worse with a large library and/or a full scan. This definitely needs caching. If Jellyseerr is deployed on a host with a lot of containers, it could indirectly make other services unavailable due to DNS rate limiting.
So far, only the extra_hosts attribute helps.
@Fallenbagel Just had the same issue on a fresh install. Node.js does not cache DNS lookups. When scanning large libraries, lots of calls to the Jellyfin server are made, and since the lookup is not cached, each call reaches out to the DNS server to resolve the hostname. Some DNS servers (like Pi-hole) will throttle/block requests when a certain threshold is reached within a limited timeframe. This is what generates these errors:
`[Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: getaddrinfo EAI_AGAIN jellyfin.example.com`
Jellystat had the same issue, and it was fixed by implementing cacheable-lookup in axios. Maybe the same could be done for Jellyseerr?
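For illustration, roughly what that could look like wired into an axios instance (a sketch under my own assumptions; the base URL, TTL, and client setup are placeholders, not how Jellyseerr actually builds its client):

```ts
// Sketch: cacheable-lookup attached to an axios client so repeated requests
// reuse resolved addresses instead of hitting the DNS server every time.
import axios from 'axios';
import https from 'https';
import CacheableLookup from 'cacheable-lookup';

const cacheable = new CacheableLookup({ maxTtl: 300 }); // cap cache entries at 5 minutes
const agent = new https.Agent({ keepAlive: true });
cacheable.install(agent); // hostnames resolved through this agent are now cached

const jellyfinApi = axios.create({
  baseURL: 'https://jellyfin.example.com',
  httpsAgent: agent,
});
```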
Oh yeah. Actually, I looked into axios-cached-dns-resolve and it might be better for Jellyseerr.
:tada: This issue has been resolved in version 1.9.1 :tada:
The release is available on:
- v1.9.1 - GitHub release
Your semantic-release bot :package::rocket:
I recently tested this again on 1.9.2. A full library scan still generates enough traffic to be rate-limited by the DNS server, which takes down all other services on that host that need name resolution. Was this implemented and then removed? https://github.com/Fallenbagel/jellyseerr/commit/2f7cfa35335f982d23929adb49b8811af754eeb1
#837
Any thoughts on remediating this behavior? I'm seeing 40k+ DNS requests per 24 hours just for Jellyfin, which seems a bit excessive.
I get close to 100k per day. This issue really shouldn't be closed... Until this is fixed, you can manually add an entry to the /etc/hosts file of the machine running Jellyseerr, or, if you're using docker compose, add `extra_hosts: - "FQDN:IP"`.
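For example, the hosts entry is just a line like this (placeholder IP and hostname):

```
192.168.1.10  jellyfin.example.com
```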
This was closed automatically when we added DNS caching. It did not work properly (it did cache if you had set the TTL correctly, but it then created other DNS issues), so we were forced to remove it. The DNS spam was not as much of a problem as the issues the DNS caching introduced back then.
That doesn't mean we are not trying to find a way to fix this.
Besides, Node.js does not implement native DNS caching, so until we can find a solution that does not create other issues, I'm afraid this issue will have to stay in the backlog for now.