R Max Espinoza
I haven't tried it myself, but something like what you posted should work. Did you run into a problem?
You could make a shared cookie storage in redis by using a downloader middleware, similar to this: https://github.com/scrapy/scrapy/blob/ebef6d7c6dd8922210db8a4a44f48fe27ee0cd16/scrapy/downloadermiddlewares/cookies.py#L27 but fetching and storing the cookies in redis instead.
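A rough sketch of what that middleware could look like. Everything here is illustrative, not part of scrapy or scrapy-redis: the class name, the key scheme, and the `server` argument (a stand-in for a redis-py client, or anything with `get`/`set`). Note that `CookieJar` holds an `RLock` and can't be pickled directly, so only its internal cookie dict is serialized.

```python
import pickle
from http.cookiejar import CookieJar  # scrapy's CookieJar wraps this


class RedisCookiesMiddleware:
    """Sketch of a downloader middleware that shares cookie jars across
    spider processes via redis. `server` is anything exposing get/set,
    e.g. a client from redis.from_url(...). Names are illustrative."""

    def __init__(self, server, prefix="cookies"):
        self.server = server
        self.prefix = prefix

    def _key(self, spider_name, jar_id):
        return f"{self.prefix}:{spider_name}:{jar_id}"

    def load_jar(self, spider_name, jar_id="default"):
        # CookieJar contains an RLock, so we pickle only its cookie dict.
        jar = CookieJar()
        data = self.server.get(self._key(spider_name, jar_id))
        if data:
            jar._cookies = pickle.loads(data)
        return jar

    def store_jar(self, spider_name, jar, jar_id="default"):
        self.server.set(self._key(spider_name, jar_id),
                        pickle.dumps(jar._cookies))

    # A full middleware would also implement process_request /
    # process_response, loading the jar, calling jar.add_cookie_header(request)
    # or jar.extract_cookies(response, request) as the upstream
    # CookiesMiddleware does, and storing the jar back afterwards.
```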
Hi, thank you for your input. A while ago I was thinking about something similar but couldn't pursue the implementation. It would be great if you could go ahead with the...
As @nieweiming said, you can avoid the warning for your project by overriding [make_request_from_data](https://github.com/rmax/scrapy-redis/blob/master/src/scrapy_redis/spiders.py#L121) and instead of [calling make_requests_from_url](https://github.com/rmax/scrapy-redis/blob/master/src/scrapy_redis/spiders.py#L134) just create the `Request` object directly. That could be a good...
I think it makes sense to add some configurable retry behaviour for known cases.
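As a rough idea of the shape it could take — the `retry_on` helper and its parameters are hypothetical, not part of scrapy-redis — the "known cases" would be the transient exception types (e.g. `redis.ConnectionError`) passed in:

```python
import time
from functools import wraps


def retry_on(exceptions, attempts=3, delay=0.1, backoff=2):
    """Hypothetical helper: retry a callable on known transient errors
    with exponential backoff, re-raising after the last attempt."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == attempts - 1:
                        raise  # out of attempts, propagate
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator
```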
What does your pipeline look like? Are you using non-blocking operations?
It doesn't. You would need to implement a priority queue in redis as the backend.
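A minimal sketch of such a backend using a redis sorted set, where the score is the priority and the highest score pops first. The class and key names are illustrative, and `server` stands in for a redis-py client (anything exposing `zadd`/`zpopmax`):

```python
class RedisPriorityQueue:
    """Sketch of a priority queue backed by a redis sorted set.
    `server` is anything exposing zadd/zpopmax, e.g. a client from
    redis.from_url(...). Higher score pops first."""

    def __init__(self, server, key="queue"):
        self.server = server
        self.key = key

    def push(self, item, priority=0):
        # zadd stores the item with its priority as the sort score.
        self.server.zadd(self.key, {item: priority})

    def pop(self):
        # zpopmax atomically removes and returns the highest-score member.
        result = self.server.zpopmax(self.key)
        if result:
            item, _priority = result[0]
            return item
        return None
```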
@AYiXi basically:
* going through PRs and seeing if they make sense
* approving and merging them
* cleaning up code and preparing releases
@LuckyPigeon nice! Let's give it a try. I added you as collaborator so you can close issues/PRs as you see fit.
@LuckyPigeon please check again!