Improve the deduplication of requests

Open vdusek opened this issue 1 year ago • 1 comments

Context

A while ago, Honza Javorek raised some good points regarding the deduplication process in the request queue (#190).

The first one:

Is it possible that Apify's request queue dedupes the requests based only on the URL? Because the POSTs all have the same URL, just different payloads, which should be very common - by definition of what POST is, or even in practical terms with all the GraphQL APIs around.

In response, we improved the unique key generation logic in the Python SDK (PR #193) to align with the TS Crawlee. This logic was later copied to crawlee-python and can be found in crawlee/_utils/requests.py.

The second one:

Also wondering whether two identical requests with one different HTTP header should be considered same or different. Even with a simple GET request, I could make one with Accept-Language: cs, another with Accept-Language: en, and I can get two wildly different responses from the same server.

Currently, HTTP headers are not considered in the computation of unique keys. Additionally, we do not offer an option to explicitly bypass request deduplication, unlike the dont_filter option in Scrapy (docs).

Questions

  • Should we include HTTP headers in the unique_key (extended_unique_key) computation?
  • Should we implement a dont_filter feature?
  • Should use_extended_unique_key be set as the default behavior?

vdusek avatar Jun 10 '24 09:06 vdusek

Should we include HTTP headers in the unique_key (extended_unique_key) computation?

I think yes, you should. Besides the Accept-Language example already mentioned, consider a crawler that executes requests on behalf of different authorized users: the only difference between the requests is the header carrying the authorization token. There are other special cases as well where a header has a significant impact on the content of the response.
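One way to act on this (a hedged sketch, not a proposal for the exact crawlee API; the whitelist contents are an assumption) is to fingerprint only a whitelist of headers known to affect the response, so rotating User-Agents or tracing headers do not defeat deduplication:

```python
import hashlib

# Hypothetical whitelist: headers that commonly change the response body.
SIGNIFICANT_HEADERS = {'accept', 'accept-language', 'authorization'}


def headers_fingerprint(headers: dict[str, str]) -> str:
    """Hash only whitelisted headers, normalized to lowercase names.

    Incidental headers (rotated User-Agent, request IDs) are ignored, so two
    otherwise identical requests still deduplicate.
    """
    relevant = sorted(
        (name.lower(), value.strip())
        for name, value in headers.items()
        if name.lower() in SIGNIFICANT_HEADERS
    )
    serialized = '|'.join(f'{name}:{value}' for name, value in relevant)
    return hashlib.sha256(serialized.encode()).hexdigest()
```

Under this scheme the two Accept-Language variants from the example above get different fingerprints, while adding an unlisted header leaves the fingerprint unchanged.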

Should we implement a dont_filter feature?

Yes. For example, for cases where the server returns a 200 response status, but the response body indicates that an error occurred and the request should be executed again. If I read the current implementation correctly, this would not be possible without such an option.

Mantisus avatar Jun 24 '24 21:06 Mantisus

Let's try to find a better name than dont_filter; what we want is a new option that puts a random value into unique_key.
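A minimal sketch of that approach (the flag name `always_enqueue` is a placeholder, not a decided API) is to append a random suffix to the unique key when the option is set, so the request never matches a previously seen one:

```python
from uuid import uuid4


def make_unique_key(url: str, *, always_enqueue: bool = False) -> str:
    """Sketch of a dedup-bypass option (illustrative, names hypothetical).

    With always_enqueue=True, a random UUID suffix makes the key unique on
    every call, effectively disabling deduplication for that request.
    """
    if always_enqueue:
        return f'{url}:{uuid4()}'
    return url
```

This also covers the retry-on-soft-error case above: re-enqueueing a request with such a key succeeds even though an identical request was already processed.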

B4nan avatar Sep 23 '24 13:09 B4nan

Thanks for the input. Based on this discussion and our internal discussions, I have opened #547 and #548 and am closing this issue.

vdusek avatar Sep 27 '24 17:09 vdusek