
Following too many Redirects Is Problematic

BrendanIAB opened this issue on Jul 11 '17 · 4 comments

Test case results in 4-5 redirects due to a paywall. This misconfiguration should be handled gracefully, with a clear maximum on redirects, rather than failing while trying to parse the paywall text.

  • Consider updating adstxt_crawler.py line 126 to pass "allow_redirects=False" to the requests.get call. On its own, this fails to handle useful redirects (domain.com -> www.domain.com, HTTP -> HTTPS).
  • Consider updating adstxt_crawler.py line 129 to examine "r.history" and enforce a maximum number of redirects (1-2). This requires additional code to interpret what r.history returns; see the sketch below.
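
A minimal sketch of what the capped, manual redirect handling could look like with the requests library; fetch_ads_txt and MAX_REDIRECTS are illustrative names, not the crawler's actual code:

```python
# Illustrative sketch only, not the project's actual code: fetch a URL with
# allow_redirects=False and follow Location headers manually so the number
# of hops can be capped.
import requests

MAX_REDIRECTS = 2  # assumed cap, per the 1-2 redirect suggestion above

def fetch_ads_txt(url, timeout=10):
    for _ in range(MAX_REDIRECTS + 1):
        r = requests.get(url, allow_redirects=False, timeout=timeout)
        if r.status_code in (301, 302, 307, 308) and "Location" in r.headers:
            # Resolve relative Location headers against the current URL.
            url = requests.compat.urljoin(url, r.headers["Location"])
            continue
        return r
    raise RuntimeError("Exceeded %d redirects fetching %s" % (MAX_REDIRECTS, url))
```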

BrendanIAB commented on Jul 11 '17

Per the spec, these changes make sense. In fact, based on my reading of the spec, only a single 301, 302, or 307 redirect should be allowed: a scheme change from http to https. Everything else must remain the same; otherwise you risk following an invalid redirect. I would argue that allowing a redirect from domain.com to www.domain.com risks running afoul of section 5.5. If www.domain.com is indeed the canonical domain for the site being crawled, then www.domain.com will be the domain driving significant advertising requests, so it should be the domain the crawler is configured to request ads.txt from.
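
A small sketch of that scheme-only rule; is_scheme_only_redirect is a hypothetical helper, not part of the crawler:

```python
# Hypothetical helper: a redirect is acceptable only when the target is the
# same URL with the scheme upgraded from http to https.
from urllib.parse import urlparse

def is_scheme_only_redirect(original_url, redirect_url):
    o, r = urlparse(original_url), urlparse(redirect_url)
    return ((o.scheme, r.scheme) == ("http", "https")
            and (o.netloc, o.path, o.query) == (r.netloc, r.path, r.query))
```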

mgriego commented on Jul 27 '17

In my data set, we see that "www." is routinely stripped from the domain presented in the domain field, so while this change would ensure stricter literal spec compliance, it would produce a worse practical outcome. Perhaps a quick revision to the spec to also permit domain.com -> www.domain.com redirects is the cleanest solution.

iantri commented on Jul 29 '17

I suggest we allow only two styles of redirect (I made this same comment in the Slack channel):

  1. Scheme modifications
  2. Redirects to a different host within the same domain tree (i.e. hostname.domain.tld -> domain.tld and vice versa, or even hostname.subdomain.domain.tld -> domain.tld or subdomain.domain.tld). As long as the redirect stays within the domain tree of the ORIGINAL domain, it should be allowed; checking against the original domain ensures that we don't redirect up the domain tree and then back down. This seems like a good compromise (see the sketch below).

The number of redirects allowed should probably be limited as well.
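
As a rough illustration of rule 2, assuming plain hostname string comparison; a real implementation would compare registrable domains (e.g. with the tldextract package) rather than raw host suffixes:

```python
# Hypothetical sketch of the "same domain tree" rule: the redirect target must
# be the original host, an ancestor of it, or a descendant of it.
from urllib.parse import urlparse

def within_domain_tree(original_url, redirect_url):
    orig_host = (urlparse(original_url).hostname or "").lower()
    dest_host = (urlparse(redirect_url).hostname or "").lower()
    return (dest_host == orig_host
            or dest_host.endswith("." + orig_host)
            or orig_host.endswith("." + dest_host))
```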

mgriego commented on Aug 10 '17

We implemented the redirect limitation using requests.Session() and it worked great. We should also consider adding a timeout value and validating the page type (e.g. an HTML error page returned with response code 200); I encountered several sites that return huge HTML files that cause an exception.
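
A hedged sketch of that approach; the specific cap, timeout, and content-type check are assumptions about what the redirect limitation and page-type validation might look like, not the actual implementation:

```python
# Illustrative sketch: a requests Session with a redirect cap, a request
# timeout, and a basic content-type check so HTML error pages served with a
# 200 status are not parsed as ads.txt.
import requests

session = requests.Session()
session.max_redirects = 2  # assumed cap

def fetch_and_validate(url):
    r = session.get(url, timeout=10)  # raises requests.TooManyRedirects past the cap
    content_type = r.headers.get("Content-Type", "")
    if r.status_code == 200 and "text/plain" not in content_type:
        raise ValueError("Unexpected content type %r for %s" % (content_type, url))
    return r.text
```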

ghost commented on Sep 27 '17