
parca.dev/debuginfod-client/0.18.0 nodes put a lot of unnecessary load on public debuginfod servers

Open fche opened this issue 2 years ago • 11 comments

debuginfod.elfutils.org is getting O(10Hz) requests from a moderate number of distinct GCE nodes advertising themselves as parca. That'd be fine if they were legitimate useful requests, but almost all of them are 404 not-found lookups that are repeated, sometimes several times in a second. The server is having to throttle these clients.

For example, over the last 3 days we've received 1,312,974 requests for /buildid/c393c1f2a760a00a/debuginfo, all of them 404s. These really should be cached aggressively.
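A client-side negative cache for these lookups doesn't need to be elaborate. Purely for illustration (this is not Parca's actual code; the package, names, and TTL are made up), a Go sketch of remembering recent 404s so they aren't immediately retried:

package negcache

import (
	"sync"
	"time"
)

// Cache remembers build IDs that recently returned 404 so a client
// doesn't hammer public debuginfod servers with repeated lookups.
type Cache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]time.Time // build ID -> time of the 404
}

func New(ttl time.Duration) *Cache {
	return &Cache{ttl: ttl, m: make(map[string]time.Time)}
}

// ShouldSkip reports whether a lookup for buildID 404'd within the TTL.
func (c *Cache) ShouldSkip(buildID string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	t, ok := c.m[buildID]
	if !ok {
		return false
	}
	if time.Since(t) > c.ttl {
		delete(c.m, buildID)
		return false
	}
	return true
}

// RecordMiss notes that buildID just returned 404.
func (c *Cache) RecordMiss(buildID string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[buildID] = time.Now()
}

A client would call ShouldSkip before contacting the server and RecordMiss after each 404; even a one-hour TTL would collapse the repeat lookups described above to one per build ID per node per hour.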

fche avatar Oct 18 '23 14:10 fche

For comparison, debuginfod.archlinux.org has gotten 55 requests for the same buildid since yesterday.

# journalctl --since yesterday -u debuginfod  | grep "c393c1f2a760a00a" | wc -l
55

55 requests for something that keeps returning 404 is fewer, but still not what I would call reasonable.

Foxboron avatar Oct 18 '23 15:10 Foxboron

We're very sorry for this. We dramatically improved the situation in 0.19.0, which is why I assume you're not seeing this very often with 0.19.0+ (all three of these fixes only landed in 0.19: https://github.com/parca-dev/parca/pull/3413, https://github.com/parca-dev/parca/pull/2924, https://github.com/parca-dev/parca/pull/2847).

So while not great, I think this is going to get better with time as more people upgrade from 0.18 to newer versions.

Something additional we could do (though it would take a little time) is for us (as in Polar Signals) to host a debuginfod server that caches upstream responses and make that endpoint the default in Parca. Then, if there is ever an issue like this, we could at least try to fix things and cache more aggressively without having to wait for users to upgrade. The catch is that this would only become the default in 0.20+.
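As a rough sketch of that idea (the upstream, port, and one-hour TTL here are hypothetical, not a committed design), a small Go reverse proxy that passes requests through to an upstream debuginfod while remembering its negative answers:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"time"
)

func main() {
	upstream, err := url.Parse("https://debuginfod.elfutils.org")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	var mu sync.Mutex
	miss := map[string]time.Time{} // request path -> time upstream said 404

	// Record upstream 404s so we can answer them locally next time.
	proxy.ModifyResponse = func(resp *http.Response) error {
		if resp.StatusCode == http.StatusNotFound {
			mu.Lock()
			miss[resp.Request.URL.Path] = time.Now()
			mu.Unlock()
		}
		return nil
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		t, ok := miss[r.URL.Path]
		mu.Unlock()
		// Serve a cached 404 for an hour before asking upstream again.
		if ok && time.Since(t) < time.Hour {
			http.NotFound(w, r)
			return
		}
		r.Host = upstream.Host // present the upstream's virtual host
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

A production version would also cache positive responses on disk and bound the size of the negative cache, but the shape is the same.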

Let me know what you would like to see, or if you have other suggestions; we of course want to play well with the ecosystem, and I apologize again for having created this problem in the first place.

brancz avatar Oct 19 '23 08:10 brancz

We've realized that the 55 requests Arch is getting are spill-over from the main debuginfod.elfutils.org proxy. So any negative cache hits are just forwarded to us (and probably other mirrors).

Foxboron avatar Oct 19 '23 08:10 Foxboron

> Something additional we could do (though it would take a little time) is for us (as in Polar Signals) to host a debuginfod server that caches upstream responses and make that endpoint the default in Parca. Then, if there is ever an issue like this, we could at least try to fix things and cache more aggressively without having to wait for users to upgrade. The catch is that this would only become the default in 0.20+.

I think this sounds like a good idea. It would allow you more leeway to manage negative hits and ensure you are playing well with the upstream mirrors.

Foxboron avatar Oct 19 '23 08:10 Foxboron

Understood, so if all this traffic comes from clients running old code that cannot be retroactively reconfigured to use a server of your own, then there's not much either of us can do except wait for the user population to upgrade. ;-) OK, no problem, the server will protect itself as best it can, and we'll carry on.

fche avatar Oct 19 '23 14:10 fche

Thanks for understanding! :)

brancz avatar Oct 19 '23 15:10 brancz

I'll leave this open until we move to a Polar Signals managed debuginfod endpoint.

brancz avatar Oct 19 '23 15:10 brancz

By the way, we can install httpd-level redirects for these old 0.18 clients from debuginfod.elfutils.org to another server pretty easily.
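For what it's worth, such a redirect could presumably be expressed as a mod_rewrite rule along these lines (the target host is a placeholder, and the User-Agent pattern would need checking against real logs):

# Hypothetical httpd sketch: send known-chatty 0.18 clients to another server.
RewriteEngine On
RewriteCond "%{HTTP_USER_AGENT}" "^parca\.dev/debuginfod-client/0\.18"
RewriteRule "^/buildid/(.*)$" "https://parca-debuginfod.example.org/buildid/$1" [R=307,L]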

fche avatar Oct 20 '23 02:10 fche

Fresher observations include a steady stream of parca traffic onto debuginfod.elfutils.org, which is great. One problem is that even successful fetches don't appear to be cached reliably on your side. For example, for just one buildid, over the course of one hour, we're seeing:

Jan 19 12:04:04 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:04:04 PM GMT] (3834/3835): 127.0.0.1:37828 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.188.104 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:10:32 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:10:32 PM GMT] (3834/3835): 127.0.0.1:50404 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:19:49 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:19:49 PM GMT] (3834/3835): 127.0.0.1:58460 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.180.226 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:19:51 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:19:51 PM GMT] (3834/3836): 127.0.0.1:58512 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:29:01 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:29:01 PM GMT] (3834/3835): 127.0.0.1:57992 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:37:59 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:37:59 PM GMT] (3834/3835): 127.0.0.1:56594 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.180.226 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:45:52 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:45:52 PM GMT] (3834/3836): 127.0.0.1:45524 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.230.180.226 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:55:58 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:55:58 PM GMT] (3834/3836): 127.0.0.1:40784 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms
Jan 19 12:55:59 elastic.org debuginfod[3834]: [Fri 19 Jan 2024 12:55:59 PM GMT] (3834/3835): 127.0.0.1:38540 UA:parca.dev/debuginfodclient/0.20.0 XFF:35.188.226.152 GET /buildid/ff69401f5b9593dd4b1d31d7f2ff1688f265e371/debuginfo 200 3367072 0+0ms

Note how both the .152 and the .226 machines fetch the same file multiple times within the same hour. The total data flow to parca appears to be on the order of 1 TB/week, which is right around our upper limit of tolerability. Can you check whether there's anything simple that can be done on your side for this case?

fche avatar Jan 19 '24 16:01 fche

We perform two requests: one to find out whether the artifact exists, and one to actually download it. Could we do a HEAD request instead, or is there another way to determine whether a "future" request will work?
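A HEAD probe is straightforward to express; a minimal Go sketch (the server and build-id are just examples taken from this thread, and it assumes the server answers HEAD the same way as GET, minus the body):

package main

import (
	"fmt"
	"net/http"
)

// headProbe asks whether debuginfo for buildID exists without
// transferring the body.
func headProbe(server, buildID string) (bool, error) {
	resp, err := http.Head(fmt.Sprintf("%s/buildid/%s/debuginfo", server, buildID))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	present, err := headProbe("https://debuginfod.elfutils.org",
		"ff69401f5b9593dd4b1d31d7f2ff1688f265e371")
	if err != nil {
		panic(err)
	}
	fmt.Println("present:", present)
}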

brancz avatar Jan 20 '24 17:01 brancz

I see more than two requests per IP address per hour for the same randomly chosen build-id, so something's not working quite that way.

By the way, what's the downside of directly asking for the item? If it doesn't exist, you'll be told pretty quickly. If it does, you'll get the file. The client can abort the download if it wishes.

By the way by the way, with the environment variable DEBUGINFOD_MAXSIZE=1 set, the client can get a different return code for present vs. absent, though it may still take some processing time. The limit is communicated to the server via the "X-DEBUGINFOD-MAXSIZE: 1" request header.
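In Go, that header trick might look like the sketch below. Treating any non-404 status as "present" is an assumption here; the server's exact response for an over-limit file should be verified:

package main

import (
	"fmt"
	"net/http"
)

// probeWithMaxsize sends "X-DEBUGINFOD-MAXSIZE: 1" so the server won't
// ship a real payload. A 404 means absent; any other status is read as
// present (an assumption worth checking against server behavior).
func probeWithMaxsize(server, buildID string) (bool, error) {
	req, err := http.NewRequest(http.MethodGet,
		fmt.Sprintf("%s/buildid/%s/debuginfo", server, buildID), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("X-DEBUGINFOD-MAXSIZE", "1")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode != http.StatusNotFound, nil
}

func main() {
	present, err := probeWithMaxsize("https://debuginfod.elfutils.org",
		"c393c1f2a760a00a")
	if err != nil {
		panic(err)
	}
	fmt.Println("present:", present)
}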

By the way by the way by the way, the forthcoming "metadata" debuginfod API extensions will be another way to query the contents of the server.

fche avatar Jan 20 '24 21:01 fche