changeme
Stuck on big input file
I have an input file with 90,000 hosts and the scanner just gets stuck, doing nothing, even when I allow 100 threads with the `-t` flag.
Wow, 90k is a lot of hosts. I don't think I've run anything near that large. My suspicion is that you might be running into a memory issue, or it's still chugging through creating all of the fingerprinting permutations and loading them into the queue.
Here are a few things to check:
- Is Redis up and running? Redis performs much better than the Python in-memory queue.
- Open `top` to monitor system memory and CPU usage.
- Start changeme with the `--debug` flag.
Please send a screenshot of `top` and the last few lines of the debug output, and I'll see if I can track it down further.
Also, if you're doing a scan that large, it may be advantageous to break it up into smaller chunks of around 5k hosts, and even spread those chunks across multiple servers, if that's an option, to get more parallelism.
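For illustration, splitting the input file into 5k-host chunks could look like this (a minimal sketch; the filenames and the small demo input are just examples, and a real run would read your actual 90k-host file with `CHUNK_SIZE = 5000`):

```python
from itertools import islice

# Demo input: 12 fake hosts (a real run would use the actual host list).
with open("hosts.txt", "w") as f:
    f.writelines(f"10.0.0.{i}\n" for i in range(12))

CHUNK_SIZE = 5  # use 5000 for the real scan

# Read CHUNK_SIZE lines at a time until islice returns an empty list,
# writing each chunk to its own numbered file.
with open("hosts.txt") as f:
    for i, chunk in enumerate(iter(lambda: list(islice(f, CHUNK_SIZE)), [])):
        with open(f"hosts_{i:03d}.txt", "w") as out:
            out.writelines(chunk)
```

Each chunk file can then be fed to a separate changeme run (or a separate server).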
Hi @ztgrace,
This is also my problem (check Line 326 to 335 here). I'm not using the nmap XML scan output; instead I use masscan output, gather all the IP addresses and ports (e.g. 123.123.123.123:1337), and loop over them in a bash script. However, it always gets stuck and I have to manually kill the changeme process, after which it continues to scan the remaining IP:port pairs.
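As a rough sketch of the masscan step described above, here's one way to turn masscan's list output (`-oL`) into `ip:port` lines; the column layout assumed here (`open tcp <port> <ip> <timestamp>`) is masscan's usual list format, and the sample data is made up:

```python
# Convert masscan -oL output into ip:port strings (e.g. 123.123.123.123:1337).
# Sample data only; a real run would read the masscan output file instead.
masscan_output = """\
#masscan
open tcp 1337 123.123.123.123 1610000000
open tcp 8080 10.0.0.5 1610000001
# end
"""

targets = []
for line in masscan_output.splitlines():
    parts = line.split()
    # Skip comment/header lines; keep only "open" records.
    if len(parts) >= 4 and parts[0] == "open":
        port, ip = parts[2], parts[3]
        targets.append(f"{ip}:{port}")

print(targets)  # ['123.123.123.123:1337', '10.0.0.5:8080']
```

The resulting `ip:port` lines can then be written to a file and passed to the scanner, rather than invoking it once per host from a bash loop.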
The other problem is that after the scan I noticed a lot of changeme processes still open even though the scan is already done (maybe a conflict with the Redis setup).
So the processes that you see are the Python subprocesses waiting for work to do. The queues are built using `ScanEngine._build_targets`, and "poison pills" are added that terminate the subprocesses. Something is happening that prevents the subprocesses from receiving the poison pills and exiting gracefully. Are you seeing any errors or timeout messages?
Also, I am looking into two design changes to changeme that should improve this situation: one is converting the target generation process to a generator pattern to reduce overhead, and the second is moving the scanners to an event-driven framework to get away from the process-management hell.
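The generator change mentioned above can be sketched as follows: instead of materializing every (host, fingerprint) pair up front, yield them lazily so memory stays flat regardless of input size (an illustrative sketch with made-up data, not changeme's code):

```python
from itertools import product

hosts = [f"10.0.0.{i}" for i in range(3)]           # stand-in for 90k hosts
fingerprints = ["http-basic", "tomcat", "jenkins"]  # stand-in credential checks


def build_targets_eager(hosts, fingerprints):
    # Current approach: build every permutation before scanning starts,
    # which for 90k hosts means a very large in-memory list.
    return [(h, f) for h, f in product(hosts, fingerprints)]


def build_targets_lazy(hosts, fingerprints):
    # Generator approach: produce one (host, fingerprint) pair at a time.
    for h in hosts:
        for f in fingerprints:
            yield (h, f)


eager = build_targets_eager(hosts, fingerprints)
lazy = list(build_targets_lazy(hosts, fingerprints))
print(len(eager), eager == lazy)  # 9 True
```

The lazy version yields the same pairs in the same order, but a consumer can start scanning immediately instead of waiting for the full queue to be built.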
> Are you seeing any errors or timeout messages?
There are a lot of error messages in my logs. I'll send you the logs after the scan.
Looking forward to using this tool.