Feature Request: Ignore Parameter Values in Crawler
Please describe your feature request:
I would like to request a feature for the Katana crawler that allows users to ignore the values of URL parameters during the crawling process. Currently, Katana crawls all variations of a URL, including those with different parameter values, which can lead to excessive crawling of fundamentally similar pages. For instance, the URLs "http://example.com?param1=1&param2=2" and "http://example.com?param1=2&param2=1" may lead to nearly identical content, yet they are treated as completely distinct pages by the crawler.
Describe the use case of this feature:
The primary motivation for this feature is to optimize the crawling efficiency of Katana. By ignoring the specific values of parameters, users can reduce the number of redundant requests made during a crawl. This would not only improve the crawling speed but also minimize the load on the target server, helping to avoid potential rate limiting or being flagged for excessive requests.
In practice, this feature could be particularly beneficial for users who work with large websites that have numerous parameters appended to their URLs, enabling a more streamlined and effective crawling process. It would help ensure that Katana focuses on the structural aspects of the site rather than getting caught in unnecessary loops due to value variations in query strings.
Thank you for considering this feature request to enhance the capabilities of the Katana crawler.
Additionally: It may be beneficial to allow users to choose which parameters to ignore, potentially by passing them as a list.
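To make the idea concrete, here is a minimal sketch of the kind of normalization such an option could perform, written in plain Go since Katana is a Go tool. This is not Katana's actual code; the dedupKey helper and the ignore set are hypothetical and only illustrate building a deduplication key in which the listed parameters keep their names but lose their values:

```go
package main

import (
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// dedupKey builds a deduplication key for a URL: parameters listed in
// ignore keep their name but lose their value, so variants of the same
// page collapse to one key. (Hypothetical helper, not Katana's API.)
func dedupKey(raw string, ignore map[string]bool) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	var parts []string
	for name, vals := range u.Query() {
		if ignore[name] {
			parts = append(parts, name) // keep the name, drop the value
			continue
		}
		for _, v := range vals {
			parts = append(parts, name+"="+v)
		}
	}
	sort.Strings(parts) // make the key independent of parameter order
	return u.Scheme + "://" + u.Host + u.Path + "?" + strings.Join(parts, "&"), nil
}

func main() {
	ignore := map[string]bool{"newtoken": true}
	a, _ := dedupKey("http://example.com/?file=abc.pdf&newtoken=12312312", ignore)
	b, _ := dedupKey("http://example.com/?file=abc.pdf&newtoken=99999999", ignore)
	fmt.Println(a == b) // true: same page, only the token value differs
}
```

With newtoken in the ignore set, two URLs for the same page that differ only in the token value map to the same key, so the second visit can be skipped, while pages that differ in other parameter values remain distinct.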
Thanks so much for your feature request, @Rand0x; we'll look into this!
@Rand0x, thank you for your feature request. Have you explored the -ignore-query-params option?
@dogancanbakir, thank you for your answer. Yes, I have already explored -iqp.
As you can see in the image, I have a website that handles the CSRF token via the newtoken parameter, so it would be very helpful if I could say, "Ignore the newtoken parameter, even if its value changes." With the -iqp option only 3 pages are crawled, but there are many more, and Katana does find them without it; however, it then gets caught in an endless loop because the parameter value changes with each request.
@Rand0x So, are you looking for an option to skip or ignore, for example, URLs that include the newtoken query parameter?
@Rand0x The -crawl-out-scope option supports regex to define out-of-scope items; you can use it to exclude the pattern you want in any part of the URL, including query parameters.
-cos, -crawl-out-scope string[] out of scope url regex to be excluded by crawler
@Rand0x So, are you looking for an option to skip or ignore, for example, URLs that include the newtoken query parameter?
Yes, correct.
@Rand0x The -crawl-out-scope option supports regex to define out-of-scope items; you can use it to exclude the pattern you want in any part of the URL, including query parameters.
-cos, -crawl-out-scope string[] out of scope url regex to be excluded by crawler
I do not want to exclude links which contain the newtoken parameter. I want to exclude duplicates.
example.com?file=abc.pdf&newtoken=12312312
example.com?file=deaf.pdf&newtoken=44332145&user=1
These are two different pages, but each page links to the other with a new value of the newtoken parameter.
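To illustrate the difference from excluding the links outright, here is a rough sketch in plain Go (not Katana code; the regex and the visited-set handling are assumptions made only for this example) where just the value of newtoken is ignored when deciding whether a URL was already visited:

```go
package main

import (
	"fmt"
	"regexp"
)

// Strip only the value of newtoken, keeping every other parameter and
// value intact. (Illustrative regex, not an existing Katana option.)
var newtokenValue = regexp.MustCompile(`(newtoken=)[^&]*`)

func visitKey(u string) string {
	return newtokenValue.ReplaceAllString(u, "$1")
}

func main() {
	visited := map[string]bool{}
	links := []string{
		"example.com?file=abc.pdf&newtoken=12312312",
		"example.com?file=deaf.pdf&newtoken=44332145&user=1",
		"example.com?file=abc.pdf&newtoken=99999999", // same page, fresh token
	}
	for _, l := range links {
		key := visitKey(l)
		if visited[key] {
			fmt.Println("skip duplicate:", l)
			continue
		}
		visited[key] = true
		fmt.Println("crawl:", l)
	}
}
```

The two example pages stay distinct because file and user keep their values, but the re-linked URL with a fresh token is recognized as a duplicate, which is exactly what would break the loop.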
@Rand0x Thank you for providing all the details. However, I believe this is a very specific use case; to implement it, we would need a more generalized approach, and I can't currently think of one. I'll close this for now, but if you have any other ideas, please let us know.