Andrey Rakhmatullin
`test_close_during_start_iteration()` can fail with a Task error about `_start_request_processing()`
> 2025-05-08 08:30:49 [scrapy.core.engine] INFO: Spider closed (shutdown)
> 2025-05-08 08:30:49 [asyncio] ERROR: Task was destroyed but it is pending!
> task:

As `ExecutionEngine._start_request_processing()` is not awaited explicitly, it should...
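The general asyncio pattern behind this error can be sketched without Scrapy: a task created with `asyncio.create_task()` whose reference is dropped can be destroyed while still pending, so the owner should keep a strong reference and cancel/await it on close. The `Engine` class below is illustrative, not Scrapy's actual engine.

```python
import asyncio


class Engine:
    """Minimal sketch (not Scrapy's real engine): hold a strong reference
    to the background task and await it on close, so the event loop never
    destroys a task that is still pending."""

    def __init__(self):
        self._processing_task = None

    async def start(self):
        # Keep the reference; a dropped create_task() result can be
        # garbage-collected while pending, triggering the warning above.
        self._processing_task = asyncio.create_task(self._process())

    async def _process(self):
        while True:
            await asyncio.sleep(0.01)

    async def close(self):
        # Cancel and await explicitly instead of letting the task leak.
        if self._processing_task is not None:
            self._processing_task.cancel()
            try:
                await self._processing_task
            except asyncio.CancelledError:
                pass


async def main():
    engine = Engine()
    await engine.start()
    await engine.close()
    return engine._processing_task.cancelled()


print(asyncio.run(main()))  # True
```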
If we want better support for native asyncio, we need to somehow provide `async def` alternatives to such public APIs as `CrawlerProcess.crawl()`, `ExecutionEngine.download()` or `ExecutionEngine.stop()`. It doesn't seem possible right...
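One possible shape for such dual APIs, sketched with plain asyncio and hypothetical names (`crawl_async()` is not an existing Scrapy method): the async-native coroutine is the primary entry point, and the blocking variant wraps it for callers outside an event loop.

```python
import asyncio


class CrawlerProcessSketch:
    """Hypothetical sketch, not Scrapy's API: an async-native entry point
    plus a blocking wrapper for callers not running inside a loop."""

    async def crawl_async(self, url):
        await asyncio.sleep(0)  # stand-in for the actual crawling work
        return f"crawled {url}"

    def crawl(self, url):
        # Only valid when no event loop is already running in this thread.
        return asyncio.run(self.crawl_async(url))


print(CrawlerProcessSketch().crawl("https://example.com"))
```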
This will need a refactoring of `FeedExport` first, though (done in #7161). Related to #6705.
We added support for async spider middlewares in 2.7 (in October 2022), and [mixing sync and async middlewares](https://docs.scrapy.org/en/latest/topics/coroutines.html#sync-async-spider-middleware) was intended to be temporary and eventually deprecated and removed, after (most?)...
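The mixing problem boils down to an iterable-adaptation step, which can be sketched without Scrapy: an async middleware consumes its upstream with `async for`, so a sync middleware's plain iterable output has to be wrapped in an async generator first. The helper names below are illustrative, not Scrapy internals.

```python
import asyncio


async def as_async_iterable(result):
    """Sketch of the sync->async adaptation needed when a sync
    middleware's output feeds an async one: wrap any plain iterable so
    that downstream `async for` works on it."""
    if hasattr(result, "__aiter__"):
        async for item in result:
            yield item
    else:
        for item in result:
            yield item


async def async_middleware(result):
    # An async spider middleware consumes its input with `async for`.
    async for item in as_async_iterable(result):
        yield item.upper()


async def main():
    # A sync middleware would produce a plain iterator like this one.
    return [x async for x in async_middleware(iter(["a", "b"]))]


print(asyncio.run(main()))  # ['A', 'B']
```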
Broke with yesterday's Twisted 25.5.0 release: https://github.com/scrapy/scrapy/actions/runs/15520532617/job/43693378052 Looks like the `_pytest.outcomes.Skipped` exception no longer bubbles up but instead causes a different exception.
Scrapy currently uses the pytest test runner, but twisted.trial for async test cases. We call `pytest.skip()` to skip tests, and this worked until the Twisted 25.5.0 release; now...
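One runner-agnostic way to skip, shown here as a sketch: raising `unittest.SkipTest` is part of the standard unittest protocol (and trial is built on unittest), whereas `pytest.skip()` raises the pytest-private `_pytest.outcomes.Skipped`, which other runners are free not to special-case.

```python
import unittest


class Demo(unittest.TestCase):
    def test_skipped(self):
        # unittest.SkipTest is the stdlib skip mechanism, understood by
        # unittest-based runners; pytest.skip() raises a pytest-private
        # exception instead.
        raise unittest.SkipTest("skipping for demo")


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Demo).run(result)
print(len(result.skipped))  # 1
```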
ruff detects that using `@lru_cache` on `GenericTranslator.css_to_xpath()` and `HTMLTranslator.css_to_xpath()` (added in #109) is not a good idea: ["the global cache will retain a reference to the instance, preventing it from...
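The usual fix for this ruff warning (B019, from flake8-bugbear) is to move the cache onto the instance, so it is freed together with the instance instead of the global `lru_cache` keeping `self` alive. A minimal sketch, with the translation body being a stand-in rather than the real `css_to_xpath()` logic:

```python
from functools import lru_cache


class Translator:
    """Sketch of the per-instance-cache fix for B019: wrap the bound
    method in __init__ instead of decorating the method, so the cache
    dies with the instance."""

    def __init__(self):
        self.css_to_xpath = lru_cache(maxsize=None)(self._css_to_xpath)

    def _css_to_xpath(self, css):
        # Stand-in for the actual CSS-to-XPath translation.
        return f"descendant-or-self::{css}"


t = Translator()
t.css_to_xpath("div")
t.css_to_xpath("div")
print(t.css_to_xpath.cache_info().hits)  # 1
```

Alternatives include caching at module level on a plain function, or using `functools.cached_property` when the value takes no arguments.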