TheTechromancer
The following URLs did pass post-check, so they were processed by gowitness. It's unclear why there were no screenshots for them:
```
2024-04-16 17:35:28,511 [DEBUG] bbot.modules.gowitness base.py:1214 URL("https://www.myaccounting.it/", module=httpx, tags={'status-200',...
```
You need to pipe the URLs into it.
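For reference, a minimal sketch of feeding URLs to gowitness standalone (this assumes gowitness v2's `file` mode; exact flags may differ between versions):
```bash
# urls.txt is a placeholder file with one URL per line
gowitness file -f urls.txt
# some gowitness versions also accept stdin via "-f -" (assumption):
# cat urls.txt | gowitness file -f -
```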
Seems to be another issue with the proxy. It might be worth trying a basic `curl` to verify that a plain web request works through the proxy.
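Something like this should exercise the same authenticated SOCKS5 path outside of the browser (the host, port, and credentials below are placeholders):
```bash
# -x sets the proxy; the socks5:// scheme accepts inline credentials.
# Use socks5h:// instead if DNS should also resolve through the proxy.
curl -v -x socks5://user:pass@127.0.0.1:1080 https://example.com/
```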
Ah okay. Apparently the issue is that Chromium doesn't support SOCKS5 auth: https://github.com/puppeteer/puppeteer/issues/1074
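For what it's worth, Chromium will take a proxy server on the command line, but there's no flag for proxy credentials, which is why an authenticated SOCKS5 proxy fails (the binary name below is a placeholder; it varies by distro):
```bash
# --proxy-server is honored, but there is no matching flag for a SOCKS5
# username/password, and credentials embedded in the URL are not used
# for SOCKS auth (see the puppeteer issue linked above).
chromium --proxy-server="socks5://127.0.0.1:1080" https://example.com/
```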
I'm hoping this will get solved when we replace gowitness with [playwright](https://github.com/microsoft/playwright).
Note: right now there's an exception for redirects in excavate. We need to make sure to rip this out if we implement this change.
Is `spider-danger` a decision that should be made by individual modules on a case-by-case basis? Or should we try and centralize it? 🤔
Actually I think the right thing to do here is to switch where we're incrementing `web_spider_distance` and where we're tagging `spider-danger`. Right now, we're incrementing `web_spider_distance` in every `URL_UNVERIFIED` and...
Thanks for reporting. Out of curiosity, have you created a `.netrc` file? I think the error message is suggesting you need to do this to it:
```bash
chmod 700 ~/.netrc...
```
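For reference, a standard `~/.netrc` looks something like this (the hostname and credentials below are placeholders, not anything BBOT requires):
```bash
# The machine/login/password format is the standard one shared by curl,
# ftp, and most other tools that read .netrc.
cat > ~/.netrc <<'EOF'
machine example.com
login myuser
password mypass
EOF
# Most tools insist the file not be readable by other users;
# 600 or 700 both remove group/world access.
chmod 600 ~/.netrc
```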
Having a `.netrc` file is uncommon and not necessary to use httpx or BBOT. If you haven't created the file, it would seem this is a bug in the httpx...