
HTTP/2 (D)DoS prevention

Open krizhanovsky opened this issue 6 years ago • 9 comments

HTTP/2 flood prevention

The paper Low-Rate Denial-of-Service Attacks against HTTP/2 Services studies HTTP/2 floods using PING and WINDOW_UPDATE frames. Fighting such attacks with a packet rate limit is a poor fit, since a legitimate client may send large POST requests, for example. On the other hand, the request rate in such attacks can be zero. So new limits with a burst component, analogous to the current request limits, must be developed.

It makes sense to apply the limit to service frames only, since HEADERS and DATA already obey the existing HTTP request limits. We can limit all the service frames together; there is no need to limit them separately.
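
A minimal sketch of what such a rate-plus-burst check for service frames could look like, assuming a simple fixed-window counter per connection (the struct and function names, as well as the frame_rate/frame_burst parameters, are hypothetical and only mirror the request_rate/request_burst idea; this is not existing Tempesta code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-connection accounting for HTTP/2 service frames. */
struct svc_frame_limiter {
	uint64_t sec_start;	/* start of the current 1-second window */
	uint64_t sec_cnt;	/* service frames seen in that window */
	uint64_t burst_start_ms;
	uint64_t burst_cnt;	/* frames seen in the current burst sub-window */
};

/* Returns false if the client exceeded frame_rate or frame_burst. */
bool
svc_frame_allowed(struct svc_frame_limiter *l, uint64_t now_ms,
		  uint64_t frame_rate, uint64_t frame_burst)
{
	uint64_t now_sec = now_ms / 1000;

	if (l->sec_start != now_sec) {
		l->sec_start = now_sec;
		l->sec_cnt = 0;
	}
	/* The burst is counted over a short sub-window, 125 ms here. */
	if (now_ms - l->burst_start_ms > 125) {
		l->burst_start_ms = now_ms;
		l->burst_cnt = 0;
	}
	return ++l->sec_cnt <= frame_rate && ++l->burst_cnt <= frame_burst;
}
```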

HTTP/2 slow read

The Slow Read Attack (CVE-2016-1546) uses many concurrent streams with tiny window updates, so:

  1. The HTTP limits must be extended with stream_rate, stream_burst and ~concurrent_streams~, similar to the corresponding connection limits.

  2. A response_timeout limit must be introduced for both HTTP/1 and HTTP/2. The limit specifies how long we may keep sending a given response to a client. To avoid rate limiting valid Comet responses, the limit should be specified only for vhosts that do not serve Comet resources.
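
A rough sketch of how the proposed response_timeout could be evaluated, assuming a per-response deadline checked from a periodic timer or on transmit completion; resp_tx and resp_deadline_expired are made-up names used only to illustrate the limit:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-response transmission state. */
struct resp_tx {
	uint64_t started_ms;	/* when sending of the response began */
	uint64_t sent_bytes;
	uint64_t total_bytes;
};

/*
 * Returns true if the response has been in flight longer than
 * response_timeout_ms; vhosts serving Comet resources would simply be
 * configured without this limit.
 */
bool
resp_deadline_expired(const struct resp_tx *tx, uint64_t now_ms,
		      uint64_t response_timeout_ms)
{
	if (tx->sent_bytes >= tx->total_bytes)
		return false;	/* response fully sent, nothing to enforce */
	return now_ms - tx->started_ms > response_timeout_ms;
}
```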

HPACK bomb

~Huffman decoding is an expensive operation and we must check http_field_len Frang limit before Huffman decoding execution (see also HPACK bomb). Considering implementation variants - it seems that the simplest and most effective way is to embed a hook into HTTP/2-parser (or into HPACK-decoder) directly to catch the long headers immediately.~

~Also see corresponding TODO comment for __frang_http_field_len() in http_limits.c - we should do the same for HTTP/1.~

~We have separate issue for this https://github.com/tempesta-tech/tempesta/issues/1780~

Header list size limit

~The HTTP limiting module provides the http_header_cnt limit, which is important for HTTP/2 security. According to RFC 7540, sections 10.5.1 and 6.5.2, when this limit is set, Tempesta should send the SETTINGS_MAX_HEADER_LIST_SIZE parameter to the client.~

Testing

  • [ ] HTTP/2 streams rate limit to prevent HTTP/2 slow read. See #88
  • [ ] HTTP/2 frames rate limit to prevent HTTP/2 floods. See #88

A test to reproduce the concurrent streams case is also needed.

Documentation

Please update https://github.com/tempesta-tech/tempesta/wiki/HTTP-security#frang-security-limits-enforcing-module with the reference to the paper.

krizhanovsky commented Aug 29 '19 19:08

> The HTTP limits must be extended with stream_rate, stream_burst and concurrent_streams, similar to the corresponding connection limits.

Tempesta should send SETTINGS_MAX_CONCURRENT_STREAMS in the SETTINGS frame when initializing a connection with the client.
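
For reference, each SETTINGS parameter is serialized as a 16-bit identifier followed by a 32-bit value in network byte order (RFC 7540, section 6.5.1), and SETTINGS_MAX_CONCURRENT_STREAMS has identifier 0x3. A small standalone sketch of the wire format (not the actual Tempesta framing code; the function names are illustrative):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define H2_SETTINGS_MAX_CONCURRENT_STREAMS	0x3

/* Writes one settings entry (6 bytes) into @buf, returns bytes written. */
size_t
h2_write_setting(uint8_t *buf, uint16_t id, uint32_t val)
{
	uint16_t nid = htons(id);
	uint32_t nval = htonl(val);

	memcpy(buf, &nid, sizeof(nid));
	memcpy(buf + sizeof(nid), &nval, sizeof(nval));
	return sizeof(nid) + sizeof(nval);
}

/* Example: advertise the configured concurrent streams limit. */
size_t
h2_write_max_concurrent_streams(uint8_t *buf, uint32_t max_streams)
{
	return h2_write_setting(buf, H2_SETTINGS_MAX_CONCURRENT_STREAMS,
				max_streams);
}
```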

Please enable the http2_general.test_h2_streams.TestH2Stream.test_max_concurrent_stream test after the fixes.

RomanBelozerov commented Jan 24 '23 12:01

While working on this task we should not forget to change

```c
if (closed_streams->num <= TFW_MAX_CLOSED_STREAMS
    || (max_streams == ctx->streams_num
	&& closed_streams->num))
```

in `tfw_h2_closed_streams_shrink`.

EvgeniiMekhanik commented Jun 28 '23 08:06

Related CVE and security advisory https://github.com/advisories/GHSA-93p3-5r25-4p75

krizhanovsky commented Apr 04 '24 22:04

  1. First of all, we should not implement stream_rate and stream_burst: we already have request_rate/request_burst, which work the same way.
  2. response_timeout is very doubtful. It can help us only if it limits the whole connection lifetime, without any updates during that lifetime. For example: a slow-read client opens a connection with 100 streams, stream 1 sends a request, and Tempesta schedules it to an upstream. We start response_timeout and, if it is exceeded, reset the connection. Cons of this method:
    1. We would always have to know how much time transferring the largest file to the slowest client takes.
    2. It's per-connection, not per-stream.

However, it seems that in the current task we must implement rate limits for the SETTINGS, PING and maybe WINDOW_UPDATE frames. Let's start with SETTINGS and PING. WINDOW_UPDATE requires some discussion, because it's not so easy and maybe not worth it. For instance, a client can send an unlimited number of WINDOW_UPDATE frames to closed streams, which is also not good.
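
A minimal sketch of where such a check could hook into HTTP/2 frame processing, using the frame type codes from RFC 7540, section 6; h2_recv_frame_check, h2_ctrl_stat and ctrl_frame_rate are hypothetical names, and the 1-second window is purely illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Frame type codes from RFC 7540, section 6. */
enum h2_frame_type {
	H2_FRAME_DATA		= 0x0,
	H2_FRAME_HEADERS	= 0x1,
	H2_FRAME_SETTINGS	= 0x4,
	H2_FRAME_PING		= 0x6,
	H2_FRAME_WINDOW_UPDATE	= 0x8,
};

/* Hypothetical per-connection counter of control frames in a 1 s window. */
struct h2_ctrl_stat {
	uint64_t win_start_sec;
	uint64_t cnt;
};

/* Returns false if the connection should be reset for flooding. */
bool
h2_recv_frame_check(struct h2_ctrl_stat *st, enum h2_frame_type type,
		    uint64_t now_sec, uint64_t ctrl_frame_rate)
{
	switch (type) {
	case H2_FRAME_PING:
	case H2_FRAME_SETTINGS:
		if (st->win_start_sec != now_sec) {
			st->win_start_sec = now_sec;
			st->cnt = 0;
		}
		return ++st->cnt <= ctrl_frame_rate;
	case H2_FRAME_WINDOW_UPDATE:
		/* Needs separate discussion, e.g. updates to closed streams. */
		return true;
	default:
		/* HEADERS and DATA are covered by the existing HTTP limits. */
		return true;
	}
}
```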

const-t commented Apr 16 '24 13:04

After some discussion, we came to the conclusion that we need one timer per connection and a bitfield that indicates which streams were read within the timer interval. On every shot the timer clears the bitfield. However, this doesn't look like a complete solution.
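
A minimal sketch of the bitfield idea, assuming up to 64 tracked stream slots per connection; conn_read_track, stream_mark_read and conn_timer_shot are made-up names used only to illustrate the mechanism:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_TRACKED_STREAMS	64

struct conn_read_track {
	uint64_t read_bits;	/* bit i: stream slot i made progress */
};

/* Called whenever the client reads (acknowledges) data on a stream. */
void
stream_mark_read(struct conn_read_track *t, unsigned slot)
{
	if (slot < MAX_TRACKED_STREAMS)
		t->read_bits |= 1ULL << slot;
}

/* Timer callback: returns true if any active stream made no progress. */
bool
conn_timer_shot(struct conn_read_track *t, uint64_t active_slots_mask)
{
	bool stalled = (t->read_bits & active_slots_mask) != active_slots_mask;

	t->read_bits = 0;	/* start a new interval */
	return stalled;
}
```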

const-t commented Apr 17 '24 15:04

For "slow read": The slow-read attack affects the CPU, memory, and incoming connection pool. So #498 does not cover the issue.

A simple solution:

  1. define a frang limit: client_read_timeout <timeout> <min_rate>
  2. sending of HEADERS/CONTINUATION/DATA frames accounts the bytes sent on the current stream
  3. reuse the keep-alive timer and set it to 1-second resolution
  4. in the timer callback, we iterate over all streams and update their rates; if the rate of any stream is lower than <min_rate>, we increase its slow duration, and if the accumulated duration is over <timeout>, we abort the connection and stop the iteration.
  5. this works for any response slow-read case (e.g. TCP-level slow read, small sockbuf) and any HTTP version.

It works similarly to Apache httpd, but here we aim at the response direction and use an AND condition to combine <timeout> and <min_rate>.
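
A rough sketch of the proposed check, assuming per-stream byte accounting in the sending path and a 1-second timer; stream_tx_stat, stream_account_tx and conn_read_timeout_shot are illustrative names, not Tempesta functions:

```c
#include <stdbool.h>
#include <stdint.h>

struct stream_tx_stat {
	uint64_t bytes_sent;	/* bytes sent since the previous timer shot */
	uint32_t slow_secs;	/* consecutive seconds below min_rate */
	bool	 active;	/* the stream currently has a response to send */
};

/* Called from the HEADERS/CONTINUATION/DATA sending path. */
void
stream_account_tx(struct stream_tx_stat *s, uint64_t bytes)
{
	s->bytes_sent += bytes;
}

/* 1-second timer callback; returns true if the connection must be aborted. */
bool
conn_read_timeout_shot(struct stream_tx_stat *streams, unsigned n,
		       uint64_t min_rate, uint32_t timeout_secs)
{
	for (unsigned i = 0; i < n; i++) {
		struct stream_tx_stat *s = &streams[i];

		if (!s->active)
			continue;
		if (s->bytes_sent < min_rate) {
			/* The stream is being read slower than <min_rate>. */
			if (++s->slow_secs > timeout_secs)
				return true;
		} else {
			s->slow_secs = 0;
		}
		s->bytes_sent = 0;	/* start a new 1-second window */
	}
	return false;
}
```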

kingluo commented Apr 17 '24 15:04

For "HTTTP/2 flood prevention": I doubt the low-rate service frames like PING can form the DOS, so maybe we just focus on the high-rate case? That is the PING flood, as well as SETTING (as suggested by @const-t, we can postpone handling WINDOW_UPDATE).

~A simple solution: we add a timing history for service frames (so far, only PING and SETTINGS), just like for HEADERS, and enforce it in Frang.~

kingluo commented Apr 17 '24 15:04

Fixes for the following CVEs must be implemented in this issue:

  • CVE-2019-9511 "Data Dribble"
  • CVE-2019-9517 "Internal Data Buffering"
  • CVE-2019-9512 "Ping Flood"
  • CVE-2019-9515 "Settings Flood"

const-t commented Apr 18 '24 16:04