
massdns error file: /tmp/shuffledns

Open oxben10 opened this issue 5 months ago • 5 comments

When using shuffledns with the -mode resolve flag, the tool runs for an extremely long time (over 4 days now) and produces no output at all.

root@hostname:~/targets/test# shuffledns -d test.com -l dns.txt -r ~/wordlists/resolvers.txt -mode resolve  -o dns-test.txt 

       __        ________        __       
  ___ / /  __ __/ _/ _/ /__  ___/ /__ ___
 (_-</ _ \/ // / _/ _/ / -_)/ _  / _ \(_-<
/___/_//_/\_,_/_//_//_/\__/ \_,_/_//_/___/

                projectdiscovery.io

[INF] Current shuffledns version v1.1.0 (latest)
[INF] Executing massdns on test.com
[INF] using massdns output directory: /tmp/shuffledns-549633477
[INF] massdns output file: /tmp/shuffledns-549633477/massdns-stdout-946412799
[INF] massdns error file: /tmp/shuffledns-549633477/massdns-stderr-393195684
[INF] Massdns execution took 1h34m10.855093208s
[INF] Started parsing massdns output

root@hostname:~/targets/test# wc -l dns.txt 
30744982 dns.txt
root@hostname:~/targets/test# 

oxben10 avatar Aug 01 '25 23:08 oxben10

I have the same issue, and I think I've traced it to the source. If you inspect /tmp/shuffledns-549633477/massdns-stdout-946412799, you'll probably find it is over 1M lines and over 100 MB; mine was 10,952,083 lines and 451 MB. It looks like shuffledns/massdns failed to remove a wildcard domain: the stdout file contained 2,145,765 records that all CNAME'd to the same host (and 4 IPs), which closely matches the number of subdomains I was brute-forcing. The root cause is probably a wildcard/CNAME cleaning bug, but a failsafe could be implemented in pkg/massdns/process.go, in parseMassDNSOutputFile(...): for example, return a parsing error if the input file is larger than 20 MB (or whatever is well above normal). The hang itself appears to be LevelDB resource exhaustion triggered by this function.

minispooner avatar Sep 24 '25 16:09 minispooner

This file-size check should handle the error safely (though it doesn't solve the root issue). Let me know if you want me to PR it. I haven't tested it, and I'm not sure what abnormal sizing looks like; unfortunately I don't have much time to test. Hope this helps.

// Check file size before processing to guard against runaway massdns output
// (needs the "fmt" and "os" imports already present in pkg/massdns/process.go)
fileInfo, fileErr := os.Stat(tmpFile)
if fileErr != nil {
	return fmt.Errorf("could not conduct size safety-check on massdns output file %s: %w", tmpFile, fileErr)
}

const maxFileSize = 50 * 1024 * 1024 // 50 MB limit
if fileInfo.Size() > maxFileSize {
	return fmt.Errorf("massdns output file %s too large: %d bytes (max allowed: %d bytes). Use a smaller file or split it into chunks", tmpFile, fileInfo.Size(), maxFileSize)
}

minispooner avatar Sep 24 '25 16:09 minispooner

I have a solution working locally that parses the file correctly, even when it is huge. For example, parsing my 450 MB file now takes about 10s using a streaming reader and in-memory Go maps for wildcard tracking instead of LevelDB. It successfully removed about 2 million wildcard records and left the ~25 legit domains. I'm testing it over several domains and will submit a PR within the next few days to fix this.

minispooner avatar Sep 30 '25 19:09 minispooner

After looking deeper into the code, I found a few more bugs in the wildcard-removal feature. I've opted to write my own massdns results parser, so I won't be pushing any fixes here, but here are the wildcard-removal bugs:

  • massdns defaults to 0.0.0.0 when there's a DNS issue, so those results should be excluded. It's worth removing 127.0.0.1 as well, but probably keep other internal IPs for various reasons (use, intel, etc.).
  • when using many resolvers, some turn out to be broken and return a static IP for every query, so you'll find hundreds of random brute-forced subs all pointing to the same IP. Running dig on many of those returns NXDOMAIN. So if more than ~10 domains point to a single IP, spot-check a few for NXDOMAIN; if several come back NXDOMAIN, it was probably a broken resolver and all of those records should be discarded.
  • after removing the wildcards, the parser doesn't add one back in. We don't want 2 million wildcard domains, but we do want at least one to scan, so add a single representative back after removing the wildcard set.
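The three heuristics above could be sketched as a post-processing filter along these lines. Everything here is hypothetical illustration (the `Record` type, the `maxShared` threshold, and the injected `isNXDomain` callback, which stands in for a live dig/DNS re-check in a real implementation):

```go
package main

import (
	"fmt"
	"sort"
)

// Record is one massdns A-record result: a name and the IP it resolved to.
type Record struct {
	Name string
	IP   string
}

// cleanResults applies the three heuristics:
//  1. drop 0.0.0.0 (massdns error fallback) and 127.0.0.1, keeping other
//     internal IPs for intel purposes;
//  2. when more than maxShared names sit behind one IP, spot-check one for
//     NXDOMAIN and discard the whole group if the resolver looks broken;
//  3. otherwise treat the group as a wildcard and keep one representative.
func cleanResults(records []Record, maxShared int, isNXDomain func(name string) bool) []string {
	byIP := make(map[string][]string)
	for _, r := range records {
		if r.IP == "0.0.0.0" || r.IP == "127.0.0.1" { // heuristic 1
			continue
		}
		byIP[r.IP] = append(byIP[r.IP], r.Name)
	}

	var keep []string
	for _, names := range byIP {
		if len(names) > maxShared {
			if isNXDomain(names[0]) { // heuristic 2: broken resolver
				continue
			}
			keep = append(keep, names[0]) // heuristic 3: one wildcard survivor
			continue
		}
		keep = append(keep, names...)
	}
	sort.Strings(keep)
	return keep
}

func main() {
	records := []Record{
		{"a.test.com", "0.0.0.0"}, // dropped: massdns error fallback
		{"b.test.com", "1.1.1.1"}, // kept: normal record
		{"c.test.com", "2.2.2.2"}, // wildcard group of three...
		{"d.test.com", "2.2.2.2"},
		{"e.test.com", "2.2.2.2"},
	}
	nx := func(name string) bool { return false } // pretend the spot check resolves fine
	fmt.Println(cleanResults(records, 2, nx))     // prints [b.test.com c.test.com]
}
```

The NXDOMAIN check is deliberately injected as a callback so the resolver re-check can be mocked in tests and rate-limited in production.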

Unfortunately I won't be able to fold all of this into a PR. I got partway there and was planning to submit one, but I kept finding areas of improvement and decided to build my own wrapper instead. Sharing here to help the community out. Best of luck!

minispooner avatar Oct 08 '25 06:10 minispooner

The problem was solved for me when I deleted the entire tool and reinstalled it, but you’re absolutely right. Thx bro

oxben10 avatar Oct 09 '25 21:10 oxben10