Samuil Petrov
It would be super useful to also add the feature of feeding more context to the spiders: not just a list of start_urls, but a list of JSON objects, like so:...
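A minimal sketch of how this could work, assuming scrapy-redis: each Redis entry is a JSON object rather than a bare URL, and the spider overrides `make_request_from_data` (the scrapy-redis hook called for every item popped from the queue) to parse it. The spider name, key, and the `category` field below are hypothetical:

```python
import json

from scrapy import Request
from scrapy_redis.spiders import RedisSpider


class ContextSpider(RedisSpider):
    name = "context_spider"                  # hypothetical name
    redis_key = "context_spider:start_urls"

    def make_request_from_data(self, data):
        # `data` is the raw bytes popped from Redis. Instead of treating
        # it as a plain URL, parse it as JSON carrying extra context,
        # e.g. {"url": "https://example.com/page", "category": "books"}.
        payload = json.loads(data.decode("utf-8"))
        return Request(
            payload["url"],
            meta={"context": payload},       # carry the extra fields along
            dont_filter=True,
        )

    def parse(self, response):
        context = response.meta["context"]
        yield {"url": response.url, "category": context.get("category")}
```

Feeding work then means pushing JSON strings onto `context_spider:start_urls` instead of bare URLs, and every downstream callback can read the extra fields from `response.meta["context"]`.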
That's exactly what I needed. Thanks a lot!
I'm trying to reach 1500 requests/min, but it seems like a single spider might not be the best approach. I noticed that scrapy-redis reads URLs from Redis in batches equal...
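For reference, a sketch of the settings that govern this, assuming scrapy-redis: the fetch batch size comes from `REDIS_START_URLS_BATCH_SIZE` and falls back to `CONCURRENT_REQUESTS` when unset, and because the scheduler and dupefilter live in Redis, several spider processes can share one queue. The numbers below are illustrative, not tuned:

```python
# settings.py -- a sketch, not a drop-in config
CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 16
REDIS_START_URLS_BATCH_SIZE = 64   # decouple the Redis fetch batch from concurrency
REDIS_URL = "redis://localhost:6379"

# Put the queue and the dupefilter in Redis so every process shares them:
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER_PERSIST = True           # keep the queue across restarts
```

With that in place, starting several identical `scrapy crawl` processes (on one machine or several) has them all pull from the same key, so throughput scales roughly with process count until bandwidth or the target site becomes the bottleneck.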
I think there's an issue at line 184 in pretty_html_table.py:

```python
int(repr(line).split('>')[1].split('
```