
Option to control when to send HTTP2 window updates

Open slux78 opened this issue 2 years ago • 4 comments

Your Feature Request

When a client sends an HTTP2 request message to haproxy, a window update message is sent back to the client. This can result in too many window update messages, so it would be nice to have a configuration option that controls when they are sent, based on conditions. For example, an option to send them only when the receive window has dropped below half of its size, or below about 25%, and so on.
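For illustration, here is a minimal standalone sketch (not HAProxy code; `rx_window`, `need_window_update` and `threshold_pct` are hypothetical names) of the kind of threshold rule being requested: only emit a WINDOW_UPDATE once the receive window has dropped below a configurable fraction of its initial size.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of a receiver-side flow-control window.  This is
 * NOT HAProxy code; it only illustrates the requested "send a
 * WINDOW_UPDATE once the window drops below a threshold" behaviour. */
struct rx_window {
    int32_t initial;   /* advertised initial window size          */
    int32_t remaining; /* credit left before the sender must stop */
};

/* Return true when a WINDOW_UPDATE should be emitted, i.e. when the
 * remaining window has dropped below the given percentage (e.g. 50 or
 * 25) of the initial size. */
static bool need_window_update(const struct rx_window *w, int threshold_pct)
{
    return w->remaining < (int64_t)w->initial * threshold_pct / 100;
}

int main(void)
{
    struct rx_window w = { .initial = 65535, .remaining = 65535 };
    const int threshold_pct = 50;

    /* Simulate receiving four 16 kB DATA frames. */
    for (int i = 1; i <= 4; i++) {
        w.remaining -= 16384;
        if (need_window_update(&w, threshold_pct)) {
            printf("after frame %d: remaining=%d -> send WINDOW_UPDATE(+%d)\n",
                   i, (int)w.remaining, (int)(w.initial - w.remaining));
            w.remaining = w.initial; /* replenish the full window */
        } else {
            printf("after frame %d: remaining=%d -> defer\n", i, (int)w.remaining);
        }
    }
    return 0;
}
```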

What are you trying to do?

I've been using haproxy on my system for about 3 years, starting with version 2.1.4 and now running version 2.4.23. I've been getting VOCs from clients saying that they receive too many HTTP2 window update messages from our system and that this needs to be improved. So I want to reduce the number of window update messages sent from haproxy to the peer system(s) via a haproxy configuration option.

Output of haproxy -vv

$ ./haproxy -vv
HAProxy version 2.4.23-62cb999 2023/06/09 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.23.html
Running on: Linux 5.15.0-76-generic #83~20.04.1-Ubuntu SMP Wed Jun 21 20:23:31 UTC 2023 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1
  DEBUG   = 

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL +EPOLL -EVPORTS +FUTEX +GETADDRINFO -KQUEUE +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OT +PCRE -PCRE2 -PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PRIVATE_CACHE -PROCCTL -PROMEX -PTHREAD_PSHARED -QUIC +RT -SLZ -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL +ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1u  30 May 2023
Running on OpenSSL version : OpenSSL 1.1.1u  30 May 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with network namespace support.
Built with zlib version : 1.2.7
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.42 2018-03-20
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 4.8.5 20150623 (Red Hat 4.8.5-36)

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP       side=FE|BE     mux=H2       flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
            fcgi : mode=HTTP       side=BE        mux=FCGI     flags=HTX|HOL_RISK|NO_UPG
       <default> : mode=HTTP       side=FE|BE     mux=H1       flags=HTX
              h1 : mode=HTTP       side=FE|BE     mux=H1       flags=HTX|NO_UPG
       <default> : mode=TCP        side=FE|BE     mux=PASS     flags=
            none : mode=TCP        side=FE|BE     mux=PASS     flags=NO_UPG

Available services : none

Available filters :
	[SPOE] spoe
	[CACHE] cache
	[FCGI] fcgi-app
	[COMP] compression
	[TRACE] trace

slux78 avatar Aug 09 '23 22:08 slux78

Please can you show us some of these window update messages from your system?
What is a VOC?
Please can you also share your config?

git001 avatar Aug 10 '23 01:08 git001

Thank you for the answer. Please check the following:

  • The following is a brief description of the message flow between the client and the server (the server is haproxy):
    client -> server   Magic, SETTINGS[0]  
    server -> client   SETTINGS[0], SETTINGS[0]  
    client -> server   WINDOW_UPDATE[0]  
    server -> client   SETTINGS[0], PING[0]  
    client -> server   PING[0]  
    client -> server   HEADERS[1]: POST /path/to/data/messages, DATA[1], JavaScript Object Notation (application/json), MANAGE UE POLICY COMMAND
    server -> client   WINDOW_UPDATE[1], WINDOW_UPDATE[0]  
    server -> client   HEADERS[1]: 200 OK, DATA[1], JavaScript Object Notation (application/json)

After receiving the client's HTTP2 request message, haproxy sends back WINDOW_UPDATE frames for both the stream (1) and the connection (0); see the sketch after the configuration below.

  • VOC means voice of customer, i.e. requests/complaints coming from my customer.

  • The haproxy configuration is:

    global
       nbthread 4
       maxconn 30000
       stats socket /var/run/haproxy.sock mode 660 expose-fd listeners level admin
       stats timeout 2m
       tune.h2.initial-window-size 6553600

    defaults
       timeout connect 10s
       timeout client 567s
       timeout server 3600s
       option forwardfor
       mode http
       maxconn 30000

    frontend my-server
       default_backend my-server
       http-request set-tos 0xE0
       http-response set-tos 0xE0
       bind 172.20.29.85:80 proto h2
       bind 172.20.29.85:32486 proto h2

    backend my-server
       balance static-rr
       cookie MYSYSTEM indirect preserve nocache
       default-server check maxconn 1000
       server server1 192.168.84.10:80 proto h2 cookie COOKIE0 check inter 10s downinter 10s observe layer4 error-limit 10 on-error mark-down
       server server2 192.168.84.5:80 proto h2 cookie COOKIE1 check inter 10s downinter 10s observe layer4 error-limit 10 on-error mark-down
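For reference, here is a small standalone sketch (not HAProxy code) of why the trace above shows both WINDOW_UPDATE[1] and WINDOW_UPDATE[0]: in HTTP/2, a DATA frame is debited from both the per-stream window and the connection-level window, so both need replenishing. The stream window below mirrors the tune.h2.initial-window-size value from the configuration; the connection window starts at 65535 per the RFC, and the body size is a hypothetical value.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustration only (not HAProxy code): in HTTP/2, a DATA frame is
 * debited from BOTH the per-stream window and the connection-level
 * window, which is why the trace above shows WINDOW_UPDATE[1] (stream)
 * followed by WINDOW_UPDATE[0] (connection) after the POST body. */
struct h2_windows {
    int32_t conn;   /* connection-level receive window (stream 0) */
    int32_t stream; /* per-stream receive window                  */
};

static void receive_data(struct h2_windows *w, int32_t len)
{
    w->conn   -= len;
    w->stream -= len;
}

int main(void)
{
    struct h2_windows w = {
        .conn   = 65535,   /* connection window always starts at 65535     */
        .stream = 6553600, /* stream window from SETTINGS_INITIAL_WINDOW_SIZE,
                              which tune.h2.initial-window-size advertises  */
    };
    int32_t body_len = 1200; /* hypothetical size of the JSON POST body */

    receive_data(&w, body_len);
    printf("stream window: %d, connection window: %d\n",
           (int)w.stream, (int)w.conn);
    printf("restoring both requires WINDOW_UPDATE[1] and WINDOW_UPDATE[0] "
           "with increment %d\n", (int)body_len);
    return 0;
}
```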

slux78 avatar Aug 10 '23 03:08 slux78

Window updates are really delicate. We're already trying to aggregate them as much as we can; if you send 100 streams at once, you'll get a single WU for the connection. The problem with not sending them after receiving something is that some clients wait for the window to be completely clean before sending more data, so you really cannot guess whether this or that client will tolerate not receiving all of them. At first I wanted to send them only once the window had been depleted below a certain threshold, but we've seen deadlocks.
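To make the deadlock risk concrete, here is a small standalone sketch (illustration only, not HAProxy code) of the scenario described above: a client that only resumes sending once its window is fully restored, facing a receiver that withholds WINDOW_UPDATE until more than half of the window has been consumed. The transfer stalls.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustration only: a strict client that refuses to send more data
 * until its send window is back to the full initial size, talking to a
 * receiver that only acknowledges consumed bytes once more than half
 * of the window has been used.  Neither condition can be met, so the
 * transfer stalls. */
int main(void)
{
    const int32_t initial = 65535;
    int32_t client_send_window = initial;
    int32_t receiver_unacked = 0;    /* consumed but not yet acknowledged */
    int32_t remaining_body = 100000; /* hypothetical request body size    */

    for (int round = 1; round <= 5 && remaining_body > 0; round++) {
        /* Strict client: only sends when the window is completely clean. */
        if (client_send_window == initial) {
            int32_t chunk = remaining_body < 16384 ? remaining_body : 16384;
            client_send_window -= chunk;
            receiver_unacked += chunk;
            remaining_body -= chunk;
            printf("round %d: client sent %d bytes\n", round, (int)chunk);
        } else {
            printf("round %d: client waits for a fully restored window\n", round);
        }

        /* Threshold receiver: only emits a WINDOW_UPDATE once more than
         * half of the window has been consumed -- which never happens
         * here, because the client stops sending first. */
        if (receiver_unacked > initial / 2) {
            client_send_window += receiver_unacked;
            receiver_unacked = 0;
        }
    }

    if (remaining_body > 0)
        printf("deadlock: %d bytes of the body were never sent\n",
               (int)remaining_body);
    return 0;
}
```

This is why, in particular, the connection-level update cannot safely be withheld.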

Just out of curiosity, what makes your customer think they're getting too many? Compared to what? Is it just because they're watching their stats and counting each frame type? I'm asking because I find this particularly surprising given how cheap these frames are, and that in any case you'll need to send one for the stream.

There's one thing we could possibly do, comparable to what we're already doing at the TCP level in H1: postpone sending the stream WU frame when the stream is already half-closed, i.e. HEADERS+ES have been received, indicating a GET (or a small POST). In this case we could emit the WU together with the response to that stream. It would still have to be sent anyway, so the number of frames would be the same; it could simply avoid an extra TCP segment back to the client after the request. But in any case we have to send the connection-level one to avoid the risk of a deadlock with a client.
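A rough sketch of that decision, using hypothetical names (`wu_action`, `stream_state`) rather than HAProxy's actual internal mux-h2 API: the connection-level WINDOW_UPDATE always goes out immediately, while the stream-level one could be deferred and emitted together with the response when the stream is already half-closed.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustration only, with hypothetical names (this is not HAProxy's
 * internal mux-h2 API): decide what to do with window updates after
 * consuming request data. */
enum wu_action {
    WU_SEND_NOW,         /* emit the WINDOW_UPDATE frame immediately       */
    WU_DEFER_TO_RESPONSE /* hold it and emit it together with the response */
};

struct stream_state {
    bool half_closed_remote; /* HEADERS/DATA with END_STREAM already received */
};

/* Connection-level updates must always go out right away, otherwise a
 * client whose connection window is exhausted could deadlock. */
static enum wu_action connection_wu(void)
{
    return WU_SEND_NOW;
}

/* Stream-level updates can wait when the peer has nothing left to send
 * on that stream (e.g. a GET, or a small POST already fully received). */
static enum wu_action stream_wu(const struct stream_state *s)
{
    return s->half_closed_remote ? WU_DEFER_TO_RESPONSE : WU_SEND_NOW;
}

int main(void)
{
    struct stream_state small_post = { .half_closed_remote = true };
    struct stream_state streaming_upload = { .half_closed_remote = false };

    printf("connection WU: %s\n",
           connection_wu() == WU_SEND_NOW ? "send now" : "defer");
    printf("small POST, stream WU: %s\n",
           stream_wu(&small_post) == WU_SEND_NOW ? "send now" : "defer to response");
    printf("streaming upload, stream WU: %s\n",
           stream_wu(&streaming_upload) == WU_SEND_NOW ? "send now" : "defer to response");
    return 0;
}
```

As noted above, deferring does not reduce the number of frames; at best it saves one TCP segment back to the client after the request.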

wtarreau avatar Aug 12 '23 17:08 wtarreau

Thank you for your kind and detailed explanation.

The VOC about too many window updates is based on the frame counts in stats, which are high compared to systems using the nghttp2 stack. My customer currently operates the same kind of systems as mine, and those use the nghttp2 stack.

slux78 avatar Aug 14 '23 02:08 slux78