For some reason the service can't find any links apart from dead ones. I tried both examples from the README; how can I solve this?
I noticed this lately too. I don't have the right setup to fix this at the moment. A workaround I know works is to have tor.service running, but also keep the Tor Browser open and running on the side - it worked for me when I first tested this bug. Hope this helps.
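If you want to confirm Tor itself is proxying traffic before blaming the tool, here's a quick sanity check from Python (a minimal sketch assuming the default SOCKS port 9050; needs `pip install requests[socks]`):

```python
import requests

# Default Tor SOCKS proxy; the socks5h scheme resolves hostnames
# through Tor as well, which .onion addresses require
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://check.torproject.org", proxies=proxies, timeout=60)
# The page contains "Congratulations" when the request really went through Tor
print("Congratulations" in resp.text)
```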
I ran into the same problem where every .onion link is marked as "Dead onion", even with Tor running correctly.
It turns out it was caused by an exception in `analyze_text()`: the NLTK resource `punkt_tab` isn't registered when you only run `nltk.download('punkt')`, which made the page parsing fail silently.
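You can reproduce the underlying error outside the tool. Assuming `analyze_text()` tokenizes with NLTK's `word_tokenize` (my guess based on the resource name), downloading only `punkt` on a recent NLTK raises a `LookupError`:

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')  # no longer sufficient on NLTK >= 3.8.2

# Raises LookupError: Resource punkt_tab not found
word_tokenize("Sample text scraped from an onion page.")
```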
I submitted a fix in PR #16 that explicitly downloads `punkt_tab`.
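If you want to patch locally before the PR is merged, the change is essentially this (a sketch of the idea, not necessarily the exact diff in PR #16):

```python
import nltk

# Fetch both resources: older NLTK releases only need 'punkt',
# while newer ones (>= 3.8.2) load tokenizer tables from 'punkt_tab'
nltk.download('punkt')
nltk.download('punkt_tab')
```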
Tested it on Kali Linux + Tor proxy and confirmed it's working.
Hope this helps anyone facing the same issue!
Thanks @josh0xA for building this tool, it's really helpful!
Thanks, works for me now!
Can you please share how you extracted the scraped data from the websites it found? The commands from the README and --help only give metadata, emails (if found), and the number of links.