Link fuzzer reports with actual vulnerabilities
`cats` is good when cherry-picking a fuzzer and running it on one endpoint.
However, today I'm scanning an API for a customer, running it against all endpoints using:
cats -c open_api.yml -s https://<API_URL> --proxyHost 127.0.0.1 --proxyPort 8080 -H "Authorization=Basic <TOKEN>" --checkHeaders --refData refs.yml --iu
I'm still getting more than 3k errors, which makes it difficult to decide what to look at first. Some of them are timeouts due to the app not handling that many requests; others don't mean much (for instance, the ExtraHeaders fuzzer produces errors even when the app doesn't process the extra headers at all, so they don't tell us anything, and there are hundreds of cases like this).
The way I work around this at the moment is to run one fuzzer at a time, but that somewhat defeats the purpose of running `cats` (ideally we want to do a full run, pick out the vulns we're interested in, and then re-run with a different set of inputs).
Proposal for improvements:

- It would be helpful to match the different types of fuzzers with known vulnerabilities, attack types, or just an explanation of how each one could be used to exploit the API, so that we could sort them in the UI and prioritize some of them.
- It would also be helpful if `cats` could help us ignore some errors. For instance, if fuzzing the `Accept` header results in the wrong expected error code, but that code is the same no matter what the `Accept` header is, it could be considered that the app ignores the header altogether, meaning 'normal behavior'.
- Have a way to include fuzzer info in the requests it makes, for instance by tweaking the `User-Agent` header: it could be something like `cats/<version> (<FUZZER_NAME> <EXPECTED_CODE>)`. This would allow, for instance, linking `cats` with the Burp proxy and doing the analysis by simply looking at the info in the request to link it back to the actual test (see the illustrative request after this list).
- Have a mode where we make the "good" request (without tampering) to check that the response code is "good" (i.e. expected in the conf). There are a lot of cases where even the normal request fails (wrong or bad data replacements, wrong authorization header, ...), so it would be nice to detect that and flag it somehow instead of having all fuzzed requests output failures.
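Purely as an illustration of the `User-Agent` idea above: a tagged request passing through the proxy could be recognized and replayed like this. The fuzzer name, version and endpoint are placeholders, not actual `cats` output.

```shell
# Illustrative only: a request tagged the way the proposal above describes,
# sent through the same Burp proxy so it can be matched back to the test.
# Fuzzer name, version and endpoint are placeholders.
curl -x http://127.0.0.1:8080 \
     -H "Authorization: Basic <TOKEN>" \
     -H "User-Agent: cats/<version> (RemoveFieldsFuzzer 400)" \
     "https://<API_URL>/some/endpoint"
```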
hi @ocervell. Some of the things you mention can already be done. Some examples:
- you can control the number of requests per minute using the `--maxRequestsPerMinute` argument; this will avoid the timeouts you mention
- rather than running each fuzzer individually, you can exclude the fuzzers which are not relevant for you using the `--skipFuzzers` argument; you can provide a comma-separated list of fuzzers to be excluded
- you can ignore specific response codes, response regexes from the body, etc. using the `--ignoreXXX` arguments; this will allow you to ignore specific errors returned by the service, or specific response codes. Ignoring means they will be reported as success and included in the report, but you can skip reporting for them using `--sri`
- I wouldn't say that an app returning the same result for all `Accept` headers means it's fine; maybe the app behaves equally badly for all `Accept` headers, so I would rely on the `--ignoreXXX` arguments mentioned above
- the "good" requests are done through the `HappyFuzzer`; if the requests need additional context (like some entities that need to be created) you can supply static/reference data using the `--refData` argument; this is a great way to provide additional context and make some fields static so the requests meet business constraints (a combined example follows this list)
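Putting those together, a follow-up run could look roughly like the sketch below. `--maxRequestsPerMinute`, `--skipFuzzers`, `--refData` and `--sri` come straight from the points above; `--ignoreResponseCodes` is my assumed spelling for one of the concrete `--ignoreXXX` options, and the refData paths/fields are placeholders, so check the CLI help for the exact names.

```shell
# Sketch of a refs.yml with static reference data per endpoint;
# the path and field names here are placeholders, not from this thread.
cat > refs.yml <<'EOF'
/orders:
  customerId: 12345
  currency: USD
EOF

# Follow-up run combining the suggestions above. --ignoreResponseCodes is an
# assumed spelling for one of the --ignoreXXX options; verify with the CLI help.
# The ExtraHeaders fuzzer name is taken from the discussion above.
cats -c open_api.yml -s https://<API_URL> \
     -H "Authorization=Basic <TOKEN>" \
     --refData refs.yml \
     --maxRequestsPerMinute 60 \
     --skipFuzzers ExtraHeaders \
     --ignoreResponseCodes 429,503 \
     --sri
```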
It's a good idea to extend the User-Agent header with additional context. I'll add that to the backlog.
I would typically recommend a first round with all fuzzers in blackbox mode: `cats ... -b -k`, which will only report 500 errors. It just needs the contract and the authorization headers.
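For example, a first blackbox pass reusing the contract and auth header from the command at the top of the thread might look like this (just a sketch; `-b` and `-k` are the short flags from the recommendation above):

```shell
# First pass in blackbox mode: -b -k as recommended above, so only 500-type
# errors get reported; same contract, server and auth header as the original run.
cats -c open_api.yml -s https://<API_URL> -H "Authorization=Basic <TOKEN>" -b -k
```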
After that, you can play with the `--ignoreXXX` arguments, `--matchXXX` arguments and the other filtering arguments.
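As a sketch of that filtering pass: `--matchResponseCodes` below is my assumption about one member of the `--matchXXX` family; the CLI help lists the real option names.

```shell
# Second pass: keep only the responses of interest in the report.
# --matchResponseCodes is an assumed spelling of one of the --matchXXX options.
cats -c open_api.yml -s https://<API_URL> -H "Authorization=Basic <TOKEN>" \
     --matchResponseCodes 500
```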
The User-Agent header is enhanced in the latest release: https://github.com/Endava/cats/releases/tag/cats-11.3.0