How to perform incremental scraping of websites?

Open nish2482 opened this issue 1 year ago • 2 comments

I find that scraping a MediaWiki site with zimit takes 5-6 hours. Is there a recommended setting for scraping a MediaWiki site with zimit? To reduce scraping time, how can we do delta scraping with zimit, so that we only scrape the changed web pages and add them to the original ZIM?

nish2482 · Jul 19 '24 12:07

The problem with MediaWiki sites is that all revision pages are grabbed one by one. This is probably not something you're interested in; you can probably set an exclude parameter to exclude revision history URLs (never tried it, but it should work; see the sketch below). It is also important to note that, in order to ZIM a MediaWiki, it is preferable to use the mwoffliner scraper, which is specifically tailored to create ZIMs of MediaWikis.
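
As a rough, untested sketch of that exclusion idea: zimit forwards crawl options to browsertrix-crawler, whose --exclude option takes regexes of URLs to skip. The exact flag names, the regex patterns, and the wiki.example.org / mywiki values below are assumptions to verify against your zimit version, not a confirmed recipe.

```sh
# Untested sketch: ask the crawler to skip MediaWiki revision-history pages.
# "action=history" matches the history view, "oldid=" matches old revisions.
zimit --url https://wiki.example.org/ \
      --name mywiki \
      --exclude "action=history" \
      --exclude "oldid="
```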

That being said, the problem of incrementally scraping a site is still relevant for many other cases, and for now there is no real solution in place. It is probably not going to be straightforward to implement.

benoit74 · Jul 19 '24 13:07

Scraping of MediaWiki sites is recommended to be done via MWoffliner.
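
For illustration only, a minimal MWoffliner invocation might look like the sketch below; --mwUrl and --adminEmail are its two required options, while the wiki URL, email, and output directory here are placeholders to adapt:

```sh
# Untested sketch: minimal MWoffliner run against a placeholder wiki.
# --mwUrl and --adminEmail are required; check other flags against your
# installed mwoffliner version.
mwoffliner --mwUrl="https://wiki.example.org/" \
           --adminEmail="you@example.org" \
           --outputDirectory=./zims
```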

kelson42 · Jul 19 '24 13:07