TheTechRobo
Not even kill-wpull-connections worked in my recent case of this; there was no wpull process running on my system at all. Since the wpull.db wasn't corrupt, I was able...
Yeah, that's a known issue. IIRC grab-site doesn't extract links from JavaScript, so those links won't be saved. The JS file itself will be saved, since it's a page requisite, but...
I mean a proxy that would parse and/or run JavaScript (and then add the links to the finished HTML, or put the links in a text file that can be...
Just realised: adding links to the HTML is a no-go, since we presumably want clean archives. But a plain text file of URLs would probably be fine. :smile_cat:
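A minimal sketch of the "text file of URLs" idea, assuming a simple regex pass over the JavaScript source rather than actually executing it (the function names, the regex, and the `discovered-urls.txt` filename are all mine, not anything grab-site provides):

```python
import re

# Rough heuristic: match absolute http(s) URLs embedded in JS source.
# A real implementation would need JS execution to catch constructed URLs.
URL_RE = re.compile(r"""https?://[^\s'"<>)]+""")

def extract_js_urls(js_text):
    """Return unique absolute URLs found in a blob of JavaScript source."""
    seen = []
    for match in URL_RE.findall(js_text):
        if match not in seen:
            seen.append(match)
    return seen

def append_urls(js_text, outfile="discovered-urls.txt"):
    """Append any URLs found in js_text to a plain-text sidecar file,
    leaving the archived JS/HTML untouched."""
    urls = extract_js_urls(js_text)
    with open(outfile, "a") as f:
        for url in urls:
            f.write(url + "\n")
    return urls
```

The sidecar file could then be fed back into a new crawl as a URL list, which keeps the archive itself clean.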
How do I add custom hooks now??
I agree with most of what you said, but I don't like this:

> no major (breaking) refactors (requiring end-user changes)

IMO, while we should avoid them, there will be...
> What do you think would be the best way to go about discovering and including those issues in creating a more concrete plan for this project?

Probably create a...
Is there a way to resume the process nonetheless? I doubt cookies would matter for the things I'm crawling. They're just large websites.
I would like to pick this up despite the cookie problem (we can just emit a warning). How difficult would this be to implement, @ivan?
Why is the name parameter hardcoded to PNphpBB2?