Adrián Chaves
OK, so I think we need to change `download` so that:

* `spider = self.spider` never happens.
* `if spider is None` changes to `if self.spider is None`.

Hopefully that...
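To illustrate the difference, here is a minimal sketch (the `Downloader` class, its `download` method, and the `self.spider` attribute here are stand-ins for the real code, not Scrapy's actual implementation):

```python
class Downloader:
    def __init__(self):
        self.spider = None

    def download(self, request):
        # Before: `spider = self.spider` took a local snapshot, and later
        # checks used that stale copy. Checking the attribute directly
        # means a spider assigned after this point is still seen.
        if self.spider is None:
            raise RuntimeError("No spider assigned yet")
        return f"downloaded {request} for {self.spider}"
```

The point is that every check reads `self.spider` at the moment it runs, instead of a copy captured earlier.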
Looks good to me, thanks! :rocket: How do you feel about test coverage? Do you think you can add at least a test to verify that the exception is not...
Could you provide a complete example, spider middleware and log output included? (you can also target toscrape.com for testing purposes)
I see. Because the `result` that `process_spider_output` receives is that generator, `process_spider_output` can run code before iterating `result`, and until `result` is iterated, `parse` does nothing. ([your example with additional prints](https://replit.com/@Gallaecio/Scrapy-5548?v=1))...
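Here is a Scrapy-free sketch of that laziness (the names `parse` and `process_spider_output` are just stand-ins for a spider callback and a middleware method, not Scrapy internals):

```python
events = []

def parse(response):
    # Stands in for a spider callback; being a generator, its body
    # only runs when the generator is iterated.
    events.append("parse body ran")
    yield {"url": response}

def process_spider_output(result):
    # Stands in for a spider middleware method. This line runs before
    # `result` is iterated, i.e. before any of parse's body executes.
    events.append("middleware ran")
    for item in result:  # only this iteration triggers parse's body
        yield item

result = parse("resp")
assert events == []  # parse() was called, but its body has not run
gen = process_spider_output(result)
assert events == []  # same for the middleware generator
items = list(gen)    # iteration runs the middleware code, then parse
assert events == ["middleware ran", "parse body ran"]
```

So the middleware's pre-iteration code always observes the world before the callback has done anything.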
I was concerned about a scenario where such a change would break stuff for someone with Storage Admin role but no Storage Object Admin role, but looking at https://cloud.google.com/storage/docs/access-control/iam-roles it...
I think it may be worth failing more gracefully on partial functions.
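For example, something along these lines (a hypothetical helper, not Scrapy's actual code; I am assuming the ungraceful failure comes from introspecting a `functools.partial` that lacks attributes like `__name__`):

```python
import functools
import inspect

def describe_callback(func):
    # Hypothetical helper: build a readable name + argument list for an
    # error message, unwrapping functools.partial instead of failing on
    # it with an opaque AttributeError (partial objects have no __name__).
    partial_note = ""
    while isinstance(func, functools.partial):
        partial_note = " (wrapped in functools.partial)"
        func = func.func
    name = getattr(func, "__name__", repr(func))
    args = list(inspect.signature(func).parameters)
    return f"{name}({', '.join(args)}){partial_note}"

def parse(response, flag):
    return response
```

`describe_callback(functools.partial(parse, flag=True))` then yields a message that still names the underlying function.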
I suggest we implement a function in the tests that is meant to (eventually) check for a perfect match (i.e. only expected warnings, warning count checked), and have all warning...
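A sketch of what such a helper could look like (names and shape are my assumption, not an existing Scrapy test utility):

```python
import warnings
from collections import Counter

def assert_warnings(recorded, expected):
    # Hypothetical test helper: enforce a perfect match -- only the
    # expected warnings were raised, with exact counts.
    actual = Counter((w.category, str(w.message)) for w in recorded)
    assert actual == Counter(expected), f"{actual} != {Counter(expected)}"

# Usage sketch:
with warnings.catch_warnings(record=True) as recorded:
    warnings.simplefilter("always")
    warnings.warn("old API", DeprecationWarning)
    warnings.warn("old API", DeprecationWarning)

assert_warnings(
    recorded,
    [(DeprecationWarning, "old API")] * 2,
)
```

Tests that cannot yet meet the strict check could start by passing a looser `expected` and tighten it over time.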
Current solution based on https://github.com/scrapy/scrapy/pull/4314, and assuming the use of the default redirect downloader middleware and duplicate filter:

```python
# settings.py
from logging import getLogger
from scrapy.downloadermiddlewares.redirect import RedirectMiddleware as...
```
Oh, this is actually as designed, but there is room for improving the documentation (log message). When you use the failing syntax, you get a warning:

> ScrapyDeprecationWarning: The -t command...
It is not explained in `scrapy crawl --help`, and https://docs.scrapy.org/en/latest/topics/commands.html is even more out of date :disappointed: