upstream memory leak

Open · lluu131 opened this issue 1 year ago • 15 comments

[screenshot: QUIC upstream memory usage]

[screenshot: UDP upstream memory usage]

With the same configuration and no caching in either case, the memory footprint of the QUIC upstream is very high and constantly increasing, while the UDP upstream stays very low.
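To illustrate the comparison, a sketch of the two setups (listen address, port, and the DoQ upstream host are placeholders; no cache option is set in either case):

# DoQ upstream: memory keeps growing
dnsproxy -l 127.0.0.1 -p 5353 -u quic://dns.example.net

# plain UDP upstream with otherwise identical settings: memory stays low
dnsproxy -l 127.0.0.1 -p 5353 -u 1.1.1.1:53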

lluu131, Dec 29 '23 01:12

[screenshot: memory usage 2 hours later]

lluu131, Dec 29 '23 02:12

@lluu131, hello and thanks for the thorough report. Unfortunately, we can't reproduce the leak. It would really help us to troubleshoot this issue if you could collect a goroutines profile for us.

To do that, restart the dnsproxy service with profiling enabled. To enable it, use the --pprof CLI option or set pprof: true in the YAML configuration file. When the memory grows to a suspicious level again, use the following command:

curl "http://127.0.0.1:6060/debug/pprof/goroutine?debug=1" > profile.txt

Or just open the "http://127.0.0.1:6060/debug/pprof/goroutine?debug=1" URL in your web browser.

Note that profiles can only be accessed from the host machine itself.

You can send the resulting profile to our [email protected].
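For example, once pprof is enabled, the goroutine profile (and, if useful, a heap profile) can be saved on the machine running dnsproxy; the heap endpoint and go tool pprof below are the standard Go pprof tooling, not anything dnsproxy-specific:

curl "http://127.0.0.1:6060/debug/pprof/goroutine?debug=1" > goroutines.txt
curl "http://127.0.0.1:6060/debug/pprof/heap" > heap.pprof
# the heap profile can be inspected locally with the Go toolchain:
go tool pprof -top heap.pprof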

EugeneOne1, Jan 16 '24 13:01

all-servers: yes
#fastest-addr: yes

Memory is fine after commenting out fastest-addr.

lluu131, Jan 19 '24 04:01

[screenshot: 2024-01-22 205438]

Debug profile re-collected. I just found out that memory increases massively when the QUIC server's network is unreachable, but it is not freed and does not shrink when connectivity recovers.

lluu131, Jan 22 '24 12:01

@EugeneOne1 Profile.txt has been sent by e-mail.

lluu131, Jan 22 '24 13:01

@lluu131, hello again. Thank you for your help, the profile clarified the issue for us. We've pushed the patch (v0.63.1) that may improve the situation. Could you please check if it does?

If the issue persists, would you mind collecting the profile again? We'd also like to take a look at the verbose log (verbose: true in the YAML configuration), if you can collect it.
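For example, a minimal sketch of collecting the verbose log (config.yaml and the log file name are placeholders; this assumes the --config-path flag is used for the YAML configuration and that the log goes to stderr when no output file is configured):

# config.yaml contains at least:
#   verbose: true
#   pprof: true
dnsproxy --config-path=./config.yaml 2> dnsproxy-verbose.log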

EugeneOne1, Jan 23 '24 14:01

Already updated on both the client and the server. I noticed from the verbose log that the client queries the root DNS servers every second. Is this normal?

[screenshot: verbose log excerpt]

lluu131, Jan 24 '24 01:01

Tested for a few hours. Memory increases after QUIC upstream interruptions and stops increasing after the upstream resumes (but it is not freed). That is some improvement over the previous constant increase, but there is still a problem. The relevant logs were sent via e-mail.

lluu131, Jan 24 '24 02:01

It looks worse.

[screenshot: memory usage]

lluu131, Jan 24 '24 12:01

@lluu131, we've received the data. Thank you for your help.

EugeneOne1, Jan 24 '24 13:01

@lluu131, we've been investigating some unusual concurrency patterns used in the DNS-over-QUIC code and found that the dependency responsible for handling the QUIC protocol probably contains the bug (quic-go/quic-go#4303). In any case, we should come up with a workaround in the meantime.
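As a quick check for the same pattern, the goroutine profile can be inspected for goroutines stuck inside quic-go (a sketch using the pprof endpoint mentioned earlier in this thread; the grep patterns are only approximations):

curl -s "http://127.0.0.1:6060/debug/pprof/goroutine?debug=2" > goroutines.txt
# total number of goroutines (each stack starts with a "goroutine N [...]:" header)
grep -c "^goroutine " goroutines.txt
# how many stack lines mention the quic-go package; if this keeps growing while
# upstreams are unreachable, it points at the same leak
grep -c "quic-go" goroutines.txt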

EugeneOne1, Feb 02 '24 14:02

[screenshot]

It used 10 GB after running for 66 days.

[screenshot]

This machine runs only my DNS servers.

The config is:


[Unit]
Description=dnsproxy Service
Requires=network.target
After=network.target

[Service]
Type=simple
User=jeremie
Restart=always
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/bin/dnsproxy -l 0.0.0.0 -p 5353 \
    --all-servers \
    -f tls://1.1.1.1 \
    -u sdns://AgcAAAAAAAAABzEuMC4wLjGgENk8mGSlIfMGXMOlIlCcKvq7AVgcrZxtjon911-ep0cg63Ul-I8NlFj4GplQGb_TTLiczclX57DvMV8Q-JdjgRgSZG5zLmNsb3VkZmxhcmUuY29tCi9kbnMtcXVlcnk \
    -f https://1.1.1.1/dns-query \
    -u https://1.0.0.1/dns-query \
    -u https://dns.google/dns-query \
    -u https://1.0.0.1/dns-query \
    -u https://mozilla.cloudflare-dns.com/dns-query \
    -u https://dns11.quad9.net/dns-query \
    -u https://dns10.quad9.net/dns-query \
    -u https://dns.quad9.net/dns-query \
    --http3 \
    --bootstrap=1.0.0.1:53

[Install]
WantedBy=multi-user.target

@EugeneOne1

Lyoko-Jeremie, Feb 13 '24 00:02

I've observed a memory leak issue in my home environment. I am using DoH. When I configured a wrong DoH URL, the system reported an out-of-memory error for the dnsproxy process.

I am using the Docker version of adguard/dnsproxy.

ir1ka, May 08 '24 09:05

Update: there are many query errors in my log. It seems that when an upstream query error occurs (for example, when the network is temporarily unavailable), memory keeps increasing until the process runs out of memory.

ir1ka, May 09 '24 02:05
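Based on the reports above, a minimal reproduction sketch (listen address, port, and the unreachable DoQ upstream host are placeholders; the flags are the dnsproxy CLI options already used in this thread):

# run dnsproxy with a DoQ upstream that cannot be reached, with profiling and verbose logging enabled
dnsproxy -l 127.0.0.1 -p 5353 -u quic://unreachable.example.net --pprof --verbose 2> dnsproxy-verbose.log

# in another shell, keep sending queries and watch the resident memory of the process
dig @127.0.0.1 -p 5353 example.com
ps -o rss= -p "$(pidof dnsproxy)"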