Esteban Blanc
SuckIT does not scrape CSS and JavaScript files, so any link to other resources they contain is not followed and thus not downloaded
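To illustrate what following such links would involve, here is a minimal sketch of extracting `url(...)` references from a CSS file so they could be queued for download. The function name and regex are assumptions for illustration, not SuckIT's actual code.

```rust
// Hypothetical sketch: pull `url(...)` references out of a CSS body.
// SuckIT does not do this today; names here are illustrative only.
use regex::Regex;

fn css_links(css: &str) -> Vec<String> {
    // Matches url("..."), url('...') and bare url(...) forms.
    let re = Regex::new(r#"url\(\s*['"]?([^'")]+)['"]?\s*\)"#).unwrap();
    re.captures_iter(css)
        .map(|cap| cap[1].to_string())
        .collect()
}

fn main() {
    let css = "body { background: url('/img/bg.png'); }";
    assert_eq!(css_links(css), vec!["/img/bg.png"]);
}
```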
> Self hosted server to avoid network delays

Fixed by #108
I need to take a proper look at your comment tomorrow, but I think our biggest concern was about the different hashtables and the waiting queue of the scraper...
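For context, a scraper of this kind typically pairs a visited set with a waiting queue so each URL is fetched at most once. The sketch below uses hypothetical names to illustrate that pattern; it is not SuckIT's actual internals.

```rust
// Minimal sketch of the visited-set + waiting-queue pattern.
// Struct and method names are assumptions for illustration.
use std::collections::{HashSet, VecDeque};

struct Scraper {
    visited: HashSet<String>, // URLs already seen (download may be pending)
    queue: VecDeque<String>,  // URLs waiting to be fetched
}

impl Scraper {
    fn new() -> Self {
        Scraper { visited: HashSet::new(), queue: VecDeque::new() }
    }

    fn push(&mut self, url: String) {
        // `insert` returns false if the URL was already seen,
        // so each page is queued at most once.
        if self.visited.insert(url.clone()) {
            self.queue.push_back(url);
        }
    }

    fn next(&mut self) -> Option<String> {
        self.queue.pop_front()
    }
}
```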
There are a lot of issues right now that need to be fixed before tackling this one. I will notify you when I start working on this, but I can't...
We could have one 404 error page per website
A good solution could be to hash the 404 or 200 webpage. This way, if the page is specific to this URL it is saved; if not, we could make...
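A minimal sketch of that idea, assuming a hypothetical `ErrorPages` helper and Rust's standard `DefaultHasher`: store one 404-page hash per host, then compare each fetched body against it to spot pages that are not specific to their URL.

```rust
// Hedged sketch of per-site 404 detection by body hash.
// Helper names and the choice of DefaultHasher are assumptions.
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn page_hash(body: &str) -> u64 {
    let mut h = DefaultHasher::new();
    body.hash(&mut h);
    h.finish()
}

struct ErrorPages {
    // One 404-page hash per website (keyed by host).
    by_host: HashMap<String, u64>,
}

impl ErrorPages {
    /// Record the hash of a page fetched from a URL known to be missing.
    fn learn(&mut self, host: &str, body_404: &str) {
        self.by_host.insert(host.to_string(), page_hash(body_404));
    }

    /// A page whose hash matches the host's 404 page is not specific
    /// to its URL, so it would not need to be saved separately.
    fn is_generic_404(&self, host: &str, body: &str) -> bool {
        self.by_host.get(host) == Some(&page_hash(body))
    }
}
```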
Hmm, ok. We have more serious issues and very little time currently; we will give this a try later
I don't remember what is wrong here :sweat_smile:
Ok. Could you share how you invoked SuckIT (see the sketch after this comment for the general shape)? If possible, tune the URL so that the error shows up as quickly as possible.

> I have been transitioning...
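For reference, an invocation typically has the following shape; the URL and values are placeholders, and the flags are assumptions to be checked against `suckit --help`.

```sh
# Assumed invocation shape: target URL, -j worker count, -o output directory.
suckit http://books.toscrape.com -j 8 -o /tmp/output
```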
> destination will always be a directory, right?

No, a file