
Only pulled one page

Open tripleo1 opened this issue 5 years ago • 6 comments

  • How do I pull an entire website with this
  • How do I see what it is doing internally?

tripleo1 avatar Jul 19 '20 13:07 tripleo1

  1. Warcworker is for single-page archiving only right now - typically for single posts on SPA websites (social media). There is no crawler or indexer. There are better tools if you want to archive a regular website, including crawling.
  2. If you want to monitor logs, run `docker-compose logs --tail=100 -t -f`.
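For reference, the log command above breaks down like this (the per-service example at the end assumes a typical service name from the docker-compose.yml; adjust to whatever yours defines):

```shell
# Follow logs from all services in the warcworker stack.
# --tail=100  start with only the last 100 lines per container
# -t          prefix each line with a timestamp
# -f          keep following new output as it arrives
docker-compose logs --tail=100 -t -f

# Narrow to a single service - "web" here is an illustrative
# service name, not necessarily what warcworker's compose file uses:
docker-compose logs --tail=100 -t -f web
```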

peterk avatar Jul 19 '20 13:07 peterk

That's an awful lot of work for just one page.
tripleo1 avatar Jul 19 '20 19:07 tripleo1

:-) Well, I use it for mass archiving of URLs collected by a custom crawler from JS-heavy websites. It does its job. For regular archiving see Heritrix. Sorry if it doesn't match your use case. I have updated the README to clarify this for other potential users.

peterk avatar Jul 19 '20 20:07 peterk

You could check out the archiving component of warcworker - Squidwarc (https://github.com/N0taN3rd/Squidwarc) - it has settings that may help you archive more links of a website (see the Page + Same Domain Links setting).
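Driving Squidwarc in that mode might look roughly like this. This is a hypothetical sketch: the config field names, the `page-same-domain` mode value, and the `run-crawler.sh` invocation are assumptions from memory, so check the Squidwarc README for the actual schema before using it.

```shell
# Hypothetical Squidwarc crawl config - the keys and mode value
# below are assumptions; verify them against the Squidwarc README.
cat > crawl-config.json <<'EOF'
{
  "mode": "page-same-domain",
  "depth": 1,
  "seeds": ["https://example.com/"]
}
EOF

# Squidwarc ships a runner script that takes a config file:
./run-crawler.sh -c crawl-config.json
```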

peterk avatar Jul 19 '20 20:07 peterk

Thanks. Was just looking at that.
tripleo1 avatar Jul 20 '20 03:07 tripleo1

If I remember correctly it only captures the current page and all the links from that page, so it will not capture an entire website. If the website you are archiving is not dependent on running scripts in the archiving tool, you could check out HTTrack as well.

peterk avatar Jul 20 '20 15:07 peterk

Closing

peterk avatar Jul 09 '24 08:07 peterk