Lukáš Křivka
I support the idea of having this in the CLI. As a reference for other users who might need this on the Apify platform, you can do it with this...
More people are asking about this, let's put it to backlog.
I can see a potential issue if the client locks the requests and a migration then clears its memory. It would then fetch the next batch, but there might...
I missed the part where the crawler pauses & unlocks on migration. That solves the problem.
I created this actor that works with both Puppeteer and Cheerio - https://apify.com/lukaskrivka/keywords-extractor. But for Cheerio, the current implementation requires the [jsdom](https://www.npmjs.com/package/jsdom) library, which could be too much baggage for the...
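For context, the core of keyword extraction doesn't strictly need jsdom once you already have the page's text. A minimal, dependency-free sketch (hypothetical helper, not the actor's actual implementation) could count keyword occurrences like this:

```javascript
// Count case-insensitive occurrences of each keyword in a page's text.
// This is a simplified illustration; the real actor also handles
// HTML parsing and more nuanced matching.
function countKeywords(text, keywords) {
    const lower = text.toLowerCase();
    const counts = {};
    for (const keyword of keywords) {
        // Escape regex metacharacters so keywords are matched literally
        const escaped = keyword.toLowerCase().replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
        const matches = lower.match(new RegExp(escaped, 'g'));
        counts[keyword] = matches ? matches.length : 0;
    }
    return counts;
}

console.log(countKeywords('Apify makes web scraping easy. Scraping with Apify!', ['apify', 'scraping']));
// → { apify: 2, scraping: 2 }
```

The jsdom dependency only comes in when you need a real DOM (e.g. to extract visible text from raw HTML); plain string matching like the above carries no extra baggage.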
I somehow overlooked that; it would work.
What do you mean by typical code? I mean that it will simply pass a valid `$` object into the `handlePageFunction`, like you can use in BasicCrawler now: ```...
Let's keep this open for now.
I think my run qualifies for `the load is very high all the time, but not extremely high`.
We already wanted to have a generic `saveSnapshot` function like the one in Web Scraper and similar actors. So if this wouldn't require additional dependencies, I would put it in.