
Scrapyrt: scrape multiple spiders asynchronously at once instead of overwhelming the server with requests

xaander1 opened this issue 3 years ago · 2 comments

@pawelmhm requesting the ability to scrape multiple spiders asynchronously at once, instead of overwhelming the server with requests. Here is what I mean:


{
    "request": {
        "url": ["https://www.site1.com", "https://www.site2.com", "https://www.site3.com"],
        "callback": "parse_product",
        "dont_filter": "True"
    },
    "spider_name": ["Site1", "Site2", "Site3"]
}

This would enable scraping with multiple spiders at once, in real time.

The alternative would be to write an API utilizing requests that programmatically sends these requests one by one asynchronously and then combines the results, which feels somewhat inelegant and resource-intensive... built-in support would be nice.
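For reference, a minimal sketch of that workaround: an asyncio/aiohttp client that hits scrapyrt's documented GET /crawl.json endpoint for each spider concurrently and merges the "items" lists from the responses. It assumes scrapyrt is running locally on its default port 9080, and the spider names and URLs are the placeholders from the example above.

import asyncio
import aiohttp

SCRAPYRT = "http://localhost:9080/crawl.json"  # scrapyrt's default port
JOBS = [  # placeholder spider names and URLs from the example above
    ("Site1", "https://www.site1.com"),
    ("Site2", "https://www.site2.com"),
    ("Site3", "https://www.site3.com"),
]

async def crawl(session, spider_name, url):
    # scrapyrt's GET API takes spider_name and url as query parameters
    params = {"spider_name": spider_name, "url": url}
    async with session.get(SCRAPYRT, params=params) as resp:
        return await resp.json()

async def main():
    async with aiohttp.ClientSession() as session:
        # Fire all crawl requests concurrently instead of one by one
        results = await asyncio.gather(
            *(crawl(session, name, url) for name, url in JOBS)
        )
    # Combine the per-spider "items" lists into one result set
    items = [item for r in results for item in r.get("items", [])]
    print(len(items), "items scraped")

asyncio.run(main())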

xaander1 · Jun 16 '21

It sounds interesting; I think some sort of batch processing would be good here. In your example it would be difficult to know which spider should crawl which URL, but maybe we could support something like this:

{ "request": [
    {"url": "http://example1", "spider": "spider1"}, 
   {"url2": "http://example2", "spider": "spider2"}
]

So essentially, request as a list. We'd have to think about how to do it; changes would have to be made in CrawlManager and CrawlResources.
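Not scrapyrt's actual code, but a rough Twisted-style sketch of how such a batch resource might fan out one crawl per {"url", "spider"} entry and merge the results. run_crawl is a hypothetical stand-in for whatever CrawlManager would expose, stubbed here so the snippet runs.

from twisted.internet import defer

def run_crawl(spider_name, url):
    # Hypothetical stand-in for whatever CrawlManager would expose;
    # here it just returns an already-fired Deferred with a fake item.
    return defer.succeed([{"spider": spider_name, "url": url}])

def handle_batch(request_list):
    # request_list: [{"url": ..., "spider": ...}, ...] as proposed above
    deferreds = [
        run_crawl(entry["spider"], entry["url"]) for entry in request_list
    ]
    d = defer.gatherResults(deferreds, consumeErrors=True)
    # Flatten the per-spider item lists into one combined response payload
    d.addCallback(lambda results: {"items": [i for r in results for i in r]})
    return d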

pawelmhm · Sep 22 '21

@pawelmhm How long until this is implemented?

xaander1 · Jan 20 '22