
DNS spam

Open smrganic opened this issue 1 year ago • 7 comments

Description

There seems to be an issue that makes Jellyseerr spam my DNS server with AAAA requests for my internal Jellyfin URL. I'm not sure why an AAAA record is even requested, since I don't use IPv6.

I tested this on 1.7.0 because that was the version I had installed. I updated to 1.8.1 thinking it might have been fixed in the meantime, but it wasn't.

I'm not sure what triggers this; maybe it's a sync job. It has been going on for a while, but I only just noticed because my primary DNS server would rate-limit this machine's IP just long enough for my secondary DNS server to take over and serve the requests. Then the secondary would rate-limit the IP just in time for the primary to take over again.

Version

1.7.0, 1.8.1

Steps to Reproduce

I used Pi-hole to monitor traffic volume. On the host where Jellyseerr is installed, I used `sudo tcpdump -i ens18 'dst port 53'` to monitor outgoing DNS requests. I am running Jellyseerr in a Docker container, so I added `extra_hosts: - "internal.mrga.dev:IP"` to my compose file. This adds the local IP to the container's hosts file, but I don't like it as a permanent solution.
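For reference, the `extra_hosts` workaround might look like this in a compose file (the IP below is a placeholder, not a value from this setup):

```yaml
# Hypothetical compose fragment illustrating the extra_hosts workaround;
# "192.168.1.10" is a placeholder for the Jellyfin server's local IP.
services:
  jellyseerr:
    extra_hosts:
      - "internal.mrga.dev:192.168.1.10"
```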

Screenshots

- Screenshot 2024-04-25 163944
- Screenshot 2024-04-25 164027 (secondary DNS)
- Screenshot 2024-04-25 170603

Logs

Will add if needed.

Platform

desktop

Device

N/A

Operating System

N/A

Browser

N/A

Additional Context

No response

Code of Conduct

  • [X] I agree to follow Jellyseerr's Code of Conduct

smrganic avatar Apr 25 '24 15:04 smrganic

Related to #387?

fallenbagel avatar Apr 25 '24 15:04 fallenbagel

Also try this option: https://github.com/Fallenbagel/jellyseerr/issues/722#issuecomment-2067363186

fallenbagel avatar Apr 25 '24 15:04 fallenbagel

Related to #387?

Could very well be this. Could the internal IP be cached? It would almost never change, so it would make sense to reuse the value instead of reaching out to DNS every time.

Why are there requests for both A and AAAA records when I have not set up IPv6? Is that a Node quirk?

smrganic avatar Apr 25 '24 15:04 smrganic

Why are there requests for both A and AAAA records when I have not set up IPv6? Is that a Node quirk?

It might be an alpine-node quirk. Try the option I sent in the other comment.

fallenbagel avatar Apr 25 '24 15:04 fallenbagel

I tried https://github.com/Fallenbagel/jellyseerr/issues/722#issuecomment-2067363186 without the extra_hosts entry in docker compose and ran a Recently Added scan. It did not help; a lot of DNS traffic was generated. This problem probably only gets worse with a large library and/or a full scan. This definitely needs caching. If Jellyseerr is deployed on a host with many containers, it could indirectly make other services unavailable due to DNS rate limiting.

So far, only the extra_hosts attribute helps.

smrganic avatar Apr 25 '24 18:04 smrganic

@Fallenbagel Just had the same issue on a fresh install. Node.js does not cache DNS requests. When running against large libraries, many calls to the Jellyfin server are made. Since the DNS lookup is not cached, each call reaches out to the DNS server to resolve the hostname. Some DNS servers (like Pi-hole) throttle/block requests when a certain threshold is reached within a limited timeframe. This is what generates these errors:

[Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: getaddrinfo EAI_AGAIN jellyfin.example.com

Jellystat had the same issue, and it was fixed by implementing cacheable-lookup in axios. Maybe the same could be done for Jellyseerr?

simoncaron avatar May 23 '24 04:05 simoncaron

@Fallenbagel Just had the same issue on a fresh install. Node.js does not cache DNS requests. When running against large libraries, many calls to the Jellyfin server are made. Since the DNS lookup is not cached, each call reaches out to the DNS server to resolve the hostname. Some DNS servers (like Pi-hole) throttle/block requests when a certain threshold is reached within a limited timeframe. This is what generates these errors:

[Jellyfin API]: Something went wrong while getting library content from the Jellyfin server: getaddrinfo EAI_AGAIN jellyfin.example.com

Jellystat had the same issue, and it was fixed by implementing cacheable-lookup in axios. Maybe the same could be done for Jellyseerr?

Oh yeah. Actually, I looked into axios-cached-dns-resolve, and it might be a better fit for Jellyseerr.

fallenbagel avatar May 23 '24 05:05 fallenbagel

:tada: This issue has been resolved in version 1.9.1 :tada:

The release is available on:

Your semantic-release bot :package::rocket:

fallenbagel avatar Jun 12 '24 06:06 fallenbagel

I recently tested this again on 1.9.2; a full library scan still generates enough traffic to be rate-limited by the DNS server. This takes down all other services on that server that need name resolution. Was this implemented and then removed? https://github.com/Fallenbagel/jellyseerr/commit/2f7cfa35335f982d23929adb49b8811af754eeb1

smrganic avatar Aug 22 '24 17:08 smrganic

I recently tested this again on 1.9.2; a full library scan still generates enough traffic to be rate-limited by the DNS server. This takes down all other services on that server that need name resolution. Was this implemented and then removed? https://github.com/Fallenbagel/jellyseerr/commit/2f7cfa35335f982d23929adb49b8811af754eeb1

#837

fallenbagel avatar Aug 22 '24 18:08 fallenbagel

Any thoughts on remediating this behavior? I'm seeing 40k+ DNS requests per 24 hours just from Jellyfin, which seems a bit excessive.

Langelus avatar Sep 19 '24 19:09 Langelus

Any thoughts on remediating this behavior? I'm seeing 40k+ DNS requests per 24 hours just from Jellyfin, which seems a bit excessive.

I get close to 100k per day. This issue really shouldn't be closed... Until it's fixed, you can manually add an entry to the /etc/hosts file of the machine running Jellyseerr, or, if you're using docker compose, add `extra_hosts: - "FQDN:IP"`.

smrganic avatar Sep 21 '24 10:09 smrganic

Any thoughts on remediating this behavior? I'm seeing 40k+ DNS requests per 24 hours just from Jellyfin, which seems a bit excessive.

I get close to 100k per day. This issue really shouldn't be closed... Until it's fixed, you can manually add an entry to the /etc/hosts file of the machine running Jellyseerr, or, if you're using docker compose, add `extra_hosts: - "FQDN:IP"`.

This was closed automatically when we added DNS caching. It did not work properly (it did cache if you had properly set the TTL, but it then created other DNS issues), so we were forced to remove it. The DNS spam was not as much of a problem as the issues that the caching introduced back then.

That doesn't mean we are not trying to find a way to fix this.

Besides, Node.js does not implement native DNS caching, so until we can find a solution that does not create other issues, I'm afraid this issue will have to stay in the backlog for now.

fallenbagel avatar Sep 21 '24 15:09 fallenbagel