Add facilities to better test versions

theseion opened this issue 9 months ago • 6 comments

I would rather change the way we deal with versions than simply change the default. For most tests we expect them to work for any HTTP version, and we should actually verify that assumption. I propose to update go-ftw, and possibly the test schema, to run each test against multiple HTTP versions, unless a test declares a restriction. I also think we should express the HTTP version restrictions as ranges (a sketch of what such ranges could look like follows the list), such that we can say:

  • run for HTTP/1.0 only
  • run for > HTTP/1.0
  • run for < HTTP/2
  • run for HTTP/1.0 - HTTP/1.1
  • ...
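
To make the range idea concrete, here is a minimal Go sketch of how such version ranges could be modelled. None of these types exist in go-ftw today; Version, Range, and their methods are hypothetical names used for illustration only.

package main

import "fmt"

// Version is an HTTP protocol version, e.g. {1, 0} for HTTP/1.0.
// Hypothetical type, not part of go-ftw.
type Version struct {
    Major, Minor int
}

// Less reports whether v precedes w in protocol order.
func (v Version) Less(w Version) bool {
    return v.Major < w.Major || (v.Major == w.Major && v.Minor < w.Minor)
}

// Range is an inclusive version range; a nil Min or Max means unbounded.
// Exclusive bounds like "> HTTP/1.0" could be added along the same lines.
type Range struct {
    Min, Max *Version
}

// Contains reports whether v falls within the range.
func (r Range) Contains(v Version) bool {
    if r.Min != nil && v.Less(*r.Min) {
        return false
    }
    if r.Max != nil && r.Max.Less(v) {
        return false
    }
    return true
}

func main() {
    // "run for HTTP/1.0 - HTTP/1.1"
    r := Range{Min: &Version{1, 0}, Max: &Version{1, 1}}
    fmt.Println(r.Contains(Version{1, 1})) // true
    fmt.Println(r.Contains(Version{2, 0})) // false
}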

Originally posted by @theseion in https://github.com/coreruleset/coreruleset/issues/4043#issuecomment-2746106846

Updating from HTTP/1.0 to HTTP/1.1 is fine. However:

  • all upgraded tests will be run against 1.1 only
  • we basically lose the information that all of those tests work for 1.0
  • the tests that actually target 1.0 could potentially also be valid for other versions, but we have no way of declaring that, except by duplicating such tests
  • the internet is running on HTTP/2.0 and HTTP/3 (QUIC), and we should run our tests against those protocols
  • we can update all tests now, but will have to do the same thing again for HTTP/2.0 at some point, with the same implications as above

theseion • Mar 24 '25 17:03

Response from @dune73:

Got you.

As of this writing, we do not know whether the tests work with HTTP/1.1, HTTP/2, etc. Your proposal expands the coverage across protocol versions. That sounds good.

Yet, I wonder if this is really necessary for every test. I see a lot of tests with application/x-www-form-urlencoded and no special HTTP headers or behavior. So I can't see how they might fail. And running these tests against multiple protocols is just a futile exercise that slows down testing.

theseion • Mar 24 '25 17:03

I don't agree. For one, protocol differences can be present anywhere, e.g., in the syntax description for the Content-Type header. Also, we cannot assume that identical protocol specifications use the same code, so the responses might be different (e.g., 400 vs. connection drop) or one of the implementations might have a bug that we might want to target (or exclude) explicitly. Running a test against multiple versions will tell us that our expectations hold for all the tested protocol versions, not just for a single one.

theseion • Mar 24 '25 18:03

First point

This would require a simple fallback mechanism. Otherwise, I can see such a change making our test suite borderline unusable for CRS integrations.

Second point

How does this affect the time taken? If we move to running ~98% of tests (the non-HTTP-version-specific tests) against each of:

  • HTTP/1.0
  • HTTP/1.1
  • HTTP/2
  • HTTP/3

does that quadruple the test run time?

Has this been tested?

Third point

If we enable the tests we have today for all HTTP versions 1.0+, how many break?

Which is to say, how much work would this change require? Four tests that need reworking is a very different proposal from, say, 50 broken tests.

Has this been tested?

Results from a PoC would be very helpful to give context to this proposal.

RedXanadu • Mar 26 '25 17:03

This would require a simple fallback mechanism. Otherwise, I can see such a change making our test suite borderline unusable for CRS integrations.

We've done something similar before with the status field, where we changed it from a list to an int. I don't see this as a big issue, except maybe when it comes to supported protocols and possibly runtime.

does that quadruple the test run time?

Has this been tested?

Very likely. No, haven't tested it yet. The current runtime on my machine for modsec2 / httpd is 28s. Quadrupling that doesn't sound too bad to me.

If we enable the tests we have today for all HTTP versions 1.0+, how many break?

Unknown, which is one of the reasons why I think this is important. Of course, I would only update the CI to use the updated logic once everything was validated.

Which is to say, how much work would this change require? Four tests that need reworking is a very different proposal from, say, 50 broken tests.

Has this been tested?

No idea. I suspect that most tests will work for both 1.0 and 1.1. I'm currently unclear about the infrastructure we would need to add to go-ftw to support HTTP/2.0 and HTTP/3.0 (QUIC).

What I would probably do is separate test configuration from protocol support in go-ftw. I.e., we'd be able to say "this test should be run against 1.0, 1.1, and 2.0" while go-ftw still only supports 1.0 and 1.1. go-ftw could then simply pick the lowest declared version and run only that, for example (see the sketch below). That behaviour would be close to the current one.
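
As a rough sketch of that fallback, assuming tests declare their versions as strings: pick the lowest declared version that go-ftw supports and run only that. The function and variable names below are hypothetical, not existing go-ftw API.

package main

import (
    "fmt"
    "sort"
)

// Hypothetical protocol ordering; a lower value means an older version.
var order = map[string]int{
    "HTTP/1.0": 0, "HTTP/1.1": 1, "HTTP/2.0": 2, "HTTP/3.0": 3,
}

// Versions go-ftw supports today (an assumption for this sketch).
var supported = map[string]bool{"HTTP/1.0": true, "HTTP/1.1": true}

// lowestSupported returns the lowest declared version that is
// supported, or "" if none is.
func lowestSupported(declared []string) string {
    sort.Slice(declared, func(i, j int) bool {
        return order[declared[i]] < order[declared[j]]
    })
    for _, v := range declared {
        if supported[v] {
            return v
        }
    }
    return ""
}

func main() {
    fmt.Println(lowestSupported([]string{"HTTP/2.0", "HTTP/1.0", "HTTP/1.1"})) // HTTP/1.0
}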

Results from a PoC would be very helpful to give context to this proposal.

We can try to get some partial results. As I wrote above, infrastructure for HTTP/2.0 and HTTP/3.0 (QUIC) doesn't exist and probably wouldn't be created just for a PoC. However, we could implement support for those protocols before continuing with this issue and a PoC.

theseion • Mar 26 '25 19:03

TL;DR: I think this is not going to work.

My initial thought is: what are we testing here? If we are trying to test support for different versions, then we don't need to run all tests over all protocols. What we should expose is something in go-ftw (and the test schema) that allows testing over many protocols, but only for a specific set of tests.

For example, we could create tests that cross-use versions to see how web servers behave: initiate an HTTP/1.1 connection but emit HTTP/2.0 in the request line. I don't know whether that would produce a useful result (a rough sketch of the idea follows).
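
For illustration, emitting a mismatched version string is trivial over a raw TCP connection, since HTTP/1.x is plain text. This is only a sketch of the idea; the target address is a placeholder, and most servers will answer with an error such as 400 or 505.

package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    // Placeholder target; point this at a local test web server.
    conn, err := net.Dial("tcp", "localhost:80")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // An otherwise HTTP/1.1-style request whose request line claims HTTP/2.0.
    fmt.Fprint(conn, "GET / HTTP/2.0\r\nHost: localhost\r\n\r\n")

    // Print whatever status line the server sends back, if any.
    status, err := bufio.NewReader(conn).ReadString('\n')
    if err != nil {
        panic(err)
    }
    fmt.Print(status)
}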

The main problem is that HTTP/2 and HTTP/3 are binary protocols, so we must rely first on library support to establish a connection (and the server must support the protocol too). So, if we use the Go standard library, would it even be possible to alter the HTTP protocol on the fly? What would be the benefits? What are we going to test there? I don't think it will add value to CRS, in the end.
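
To illustrate the point: with Go's net/http, the version never appears as text the caller could edit; for HTTP/2 the transport negotiates the protocol via ALPN and emits binary frames. A minimal sketch using golang.org/x/net/http2 (the URL is a placeholder):

package main

import (
    "fmt"
    "net/http"

    "golang.org/x/net/http2"
)

func main() {
    // The transport, not the caller, decides how the request goes on
    // the wire; there is no request line to tamper with.
    client := &http.Client{Transport: &http2.Transport{}}

    resp, err := client.Get("https://example.com") // placeholder URL
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println(resp.Proto) // "HTTP/2.0" if the server negotiated HTTP/2
}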

To make the most of this, we would end up rewriting the HTTP/2 and HTTP/3 modules to be able to "break" them, which looks like something James Kettle would do. Do we want or need this?

So:

  • do we want to test CRS against all HTTP protocols? No.
  • do we want to have some tests using all HTTP protocols? Maybe.
  • do we want go-ftw to support all HTTP versions as a client? Yes.
  • do we want to be able to "break" the HTTP client version by implementing a new client? I don't think so.

fzipi • Mar 27 '25 12:03

For example, we could create tests that cross-use versions to see how web servers behave: initiate an HTTP/1.1 connection but emit HTTP/2.0 in the request line. I don't know whether that would produce a useful result.

Hadn't even thought of that 😅

do we want to test CRS against all HTTP protocols? No.

Most of the rules, IMO. Let's take some arbitrary rule:

# [ Unix shell expressions - Bash Tilde expansion ]
#
# Detects the following patterns which are common in Unix shell scripts
# and one-liners:
#
# ~+    $PWD
# ~-    $OLDPWD
# ~4    fourth directory entry on the stack from the top
# ~-2   second directory entry on the stack from the top
# ~+2   second directory entry on the stack from the bottom
#
# Reference - https://linuxsimply.com/bash-scripting-tutorial/expansion/tilde-expansion/
#
# Regular expression generated from regex-assembly/932270.ra.
# To update the regular expression run the following shell script
# (consult https://coreruleset.org/docs/development/regex_assembly/ for details):
#   crs-toolchain regex update 932270
#
SecRule REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|REQUEST_COOKIES_NAMES|ARGS_NAMES|ARGS|XML:/* "@rx ~(?:[\+\-](?:$|[\s\x0b0-9]+)|[0-9]+)" \
    "id:932270,\
    phase:2,\
    block,\
    capture,\
    t:none,t:cmdLine,\
    msg:'Remote Command Execution: Unix Shell Expression Found',\
    logdata:'Matched Data: %{TX.0} found within %{MATCHED_VAR_NAME}: %{MATCHED_VAR}',\
    tag:'application-multi',\
    tag:'language-shell',\
    tag:'platform-unix',\
    tag:'attack-rce',\
    tag:'paranoia-level/1',\
    tag:'OWASP_CRS',\
    tag:'capec/1000/152/248/88',\
    ver:'OWASP_CRS/4.13.0-dev',\
    severity:'CRITICAL',\
    setvar:'tx.rce_score=+%{tx.critical_anomaly_score}',\
    setvar:'tx.inbound_anomaly_score_pl1=+%{tx.critical_anomaly_score}'"

We currently claim that "CRS will protect your installation from Bash tilde expansion attacks at PL 1 in httpd and nginx". The tests only target HTTP/1.1 though, so technically we can only claim that protection for HTTP/1.1.

Maybe I'm being pedantic here and we can assume that engines will get the same input, regardless of the protocol. However, as a software developer I know that there will be cases where this is not true.

theseion • Mar 27 '25 17:03