gzip problem?
I'm trying to use this and it goes well, but suddenly this error appears and I can't do anything. The command is: waybackup -u roguestatus.com/ -a --filetype html,txt -o waybackup --retry 5
hey :) do you have an error log with the full traceback?
Also, it only downloads robots.txt and not all the snapshots of the website and its other things (I think they're called paths), or just every single URL that matches this url/*.
Give me some time, I'm not at home right now :)
Oh, it's fine!
So I tried your last command. Just a small hint:
You don't need to give a wildcard (*) to waybackup. Just write:
waybackup -u roguestatus.com -l --filetype html,txt -o waybackup --retry 5
It will download everything from your specified subdir anyway.
However, I could replicate the gzip exception. It seems the file does not start with the gzip header and is in fact not compressed. These files are now downloaded raw.
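Roughly what that fallback looks like (just a minimal sketch of the idea, not the actual waybackup code; it checks the two gzip magic bytes and writes the body as-is if they are missing):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # every gzip stream starts with these two bytes

def save_response(body: bytes, path: str) -> None:
    """Decompress the body if it is really gzip, otherwise write it raw."""
    if body[:2] == GZIP_MAGIC:
        data = gzip.decompress(body)
    else:
        # no gzip header -> the file was served uncompressed, keep it as-is
        data = body
    with open(path, "wb") as f:
        f.write(data)
```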
And one last thing: it worked, but how do I download only the text/content of the website? I want to do this because it downloads too much useless stuff, and downloading HTML only works with forums IIRC. Thanks.
And do you have Discord so I can talk to you more?
What do you mean exactly? Your command downloads html and txt just fine :) Send me a mail (check my profile).
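If you mean stripping the markup afterwards: waybackup just saves what the archive serves, so you would post-process the downloaded HTML yourself. A rough sketch, assuming beautifulsoup4 is installed and your snapshots ended up under the waybackup output folder (the paths are just placeholders):

```python
from pathlib import Path
from bs4 import BeautifulSoup

# placeholder paths - point these at your actual output folder
src = Path("waybackup")        # where the downloaded .html files live
dst = Path("waybackup_text")   # where the plain-text versions go
dst.mkdir(exist_ok=True)

for html_file in src.rglob("*.html"):
    soup = BeautifulSoup(html_file.read_text(errors="ignore"), "html.parser")
    # drop script/style blocks, keep only the visible text
    for tag in soup(["script", "style"]):
        tag.decompose()
    text = soup.get_text(separator="\n", strip=True)
    (dst / f"{html_file.stem}.txt").write_text(text)
```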