
Sticky session tracking does not work on the old worker after the configuration reload

Open atitow opened this issue 4 years ago • 33 comments

Detailed description of the problem

We use HAProxy in master-worker mode as a reverse proxy to implement sticky sessions based on URL parameters. We configured peers to transfer the stick-table to the new worker after a configuration reload.

After the configuration reload, sticky session tracking does not work on the old worker.

If the client established a TCP connection to the worker before the reload, HTTP requests sent after the reload reuse the same TCP connection, but the old worker does not forward them to the proper server.

Expected behavior

The old worker should keep its stick-table as long as there are open TCP connections to it.

Steps to reproduce the behavior

  1. Start HAProxy with:
/usr/sbin/haproxy -x /run/haproxy/admin.sock -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
  2. Bind a socket on some other host to simulate "keep-alive" behavior:
socat TCP:<haproxy>:80 UNIX-LISTEN:/tmp/curl.sock,fork
  3. Execute the following curl 3 times. All 3 requests will be forwarded to the same server (tomcat4):
curl --unix-socket /tmp/curl.sock -v 'http://<haproxy>/online/documentList.xhtml;jsessionid=ABCD'
  4. Trigger a configuration reload on HAProxy:
kill -SIGUSR2 $(cat /var/run/haproxy.pid)
  5. Execute the same curl again. The HTTP request is processed by the old worker and forwarded to the wrong server (tomcat1):
curl --unix-socket /tmp/curl.sock -v 'http://<haproxy>/online/documentList.xhtml;jsessionid=ABCD'
  6. Execute the same curl without referencing the socket. The HTTP request is processed by the new worker and forwarded to the right server (tomcat4):
curl -v 'http://<haproxy>/online/documentList.xhtml;jsessionid=ABCD'

After the reload HTTP requests are forwarded to the wrong server:

Jan 25 19:57:13 ip-10-0-128-235 haproxy[5677]: 10.0.0.150:38006 [25/Jan/2021:19:57:13.972] digi-signer tomcat/tomcat4 0/0/1/1/2 302 223 - - ---- 2/2/0/0/0 0/0 "GET /online/documentList.xhtml;jsessionid=ABCD HTTP/1.1"
Jan 25 19:57:24 ip-10-0-128-235 haproxy[5677]: 10.0.0.150:38006 [25/Jan/2021:19:57:24.609] digi-signer tomcat/tomcat4 0/0/0/1/1 302 223 - - ---- 2/2/0/0/0 0/0 "GET /online/documentList.xhtml;jsessionid=ABCD HTTP/1.1"
Jan 25 19:58:48 ip-10-0-128-235 haproxy[5677]: 10.0.0.150:38006 [25/Jan/2021:19:58:48.844] digi-signer tomcat/tomcat4 0/0/0/1/1 302 223 - - ---- 2/2/0/0/0 0/0 "GET /online/documentList.xhtml;jsessionid=ABCD HTTP/1.1"
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5460]: Proxy stats started.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Stopping proxy stats in 0 ms.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5460]: Proxy stats started.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5460]: Proxy digi-signer started.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Stopping proxy stats in 0 ms.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Stopping frontend digi-signer in 0 ms.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5460]: Proxy digi-signer started.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5460]: Proxy tomcat started.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Stopping frontend digi-signer in 0 ms.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Stopping backend tomcat in 0 ms.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Stopping backend tomcat in 0 ms.
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Proxy stats stopped (cumulated conns: FE: 0, BE: 0).
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Proxy stats stopped (cumulated conns: FE: 0, BE: 0).
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Proxy digi-signer stopped (cumulated conns: FE: 4, BE: 0).
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Proxy digi-signer stopped (cumulated conns: FE: 4, BE: 0).
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Proxy tomcat stopped (cumulated conns: FE: 0, BE: 7).
Jan 25 19:59:29 ip-10-0-128-235 haproxy[5677]: Proxy tomcat stopped (cumulated conns: FE: 0, BE: 7).
Jan 25 19:59:33 ip-10-0-128-235 haproxy[5677]: 10.0.0.150:38006 [25/Jan/2021:19:59:33.021] digi-signer tomcat/tomcat1 0/0/0/0/0 302 223 - - ---- 2/2/0/0/0 0/0 "GET /online/documentList.xhtml;jsessionid=ABCD HTTP/1.1"
Jan 25 20:00:46 ip-10-0-128-235 haproxy[5687]: 10.0.0.150:38024 [25/Jan/2021:20:00:46.945] digi-signer tomcat/tomcat4 0/0/1/0/1 302 223 - - ---- 1/1/0/0/0 0/0 "GET /online/documentList.xhtml;jsessionid=ABCD HTTP/1.1"

Do you have any idea what may have caused this?

Our assumption is that the stick-table on the old worker gets reset after the transfer to the new worker (we checked it with the master CLI).
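
For reference, this is roughly how it can be checked over the master socket from the start command above; the worker PIDs are the ones visible in the logs, and the "@!<pid>" prefix routes a single command to a specific worker:

# list the master and its workers with their PIDs
echo "show proc" | socat stdio /run/haproxy-master.sock
# dump the stick-table as seen by the old worker (5677 in the logs above) and by the new one (5687)
echo "@!5677 show table tomcat" | socat stdio /run/haproxy-master.sock
echo "@!5687 show table tomcat" | socat stdio /run/haproxy-master.sock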

Do you have an idea how to solve the issue?

A possible workaround is to disable keep-alive mode (option httpclose).
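
A minimal sketch of that workaround, assuming it is placed in the defaults section; it makes HAProxy close the connection after each response, so no further request can be served by an old worker over a reused connection:

defaults
        mode    http
        option  httpclose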

What is your configuration?

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket  /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        nbproc 1

        # Default SSL material locations
        #ca-base /etc/ssl/certs
        #crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        #ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        #ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        #ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  300000
        timeout server  300000
        timeout http-keep-alive 300000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

peers mypeers
        bind :10000
        server "$HAPROXY_LOCALPEER"

listen stats
        bind *:8000
        stats enable
        stats refresh 30s
        stats show-node
        stats uri  /stats

frontend digi-signer
        bind *:80
        mode http
        default_backend tomcat

backend tomcat
        mode http
        stick-table type string len 32 size 5m expire 4h peers mypeers
        stick store-response set-cookie(JSESSIONID)
        stick on cookie(JSESSIONID)
        stick on urlp(jsessionid,;)
        balance roundrobin
        option httpchk GET /online/getStatus
        default-server inter 5s fall 2 rise 2 check
        server tomcat1 10.0.129.207:8080 id 129207
        server tomcat2 10.0.130.174:8080 id 131029
        server tomcat3 10.0.128.176:8080 id 128176
        server tomcat4 10.0.130.11:8080 id 130011

Output of haproxy -vv and uname -a

#haproxy -vv
HA-Proxy version 2.2.8-1ppa1~focal 2021/01/14 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.2.8.html
Running on: Linux 5.4.0-1035-aws #37-Ubuntu SMP Wed Jan 6 21:01:57 UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-uhSsJ9/haproxy-2.2.8=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-stringop-overflow -Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
Running on OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.3.0
Built with the Prometheus exporter as a service

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
            fcgi : mode=HTTP       side=BE        mux=FCGI
       <default> : mode=HTTP       side=FE|BE     mux=H1
              h2 : mode=HTTP       side=FE|BE     mux=H2
       <default> : mode=TCP        side=FE|BE     mux=PASS

Available services : prometheus-exporter
Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace
        [CACHE] cache
        [FCGI] fcgi-app
#uname -a
Linux ip-10-0-128-235 5.4.0-1035-aws #37-Ubuntu SMP Wed Jan 6 21:01:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Additional information (if helpful)

We run HAProxy on AWS. We were able to reproduce the same behavior on Amazon Linux with HAProxy 2.1.4:

#uname -a
Linux ip-10-0-128-92.ec2.internal 4.14.209-160.339.amzn2.x86_64 #1 SMP Wed Dec 16 22:44:04 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

atitow avatar Jan 25 '21 22:01 atitow

Please issue "show peers" on haproxy's stats socket; I suspect it's not synchronized. I'm not yet familiar with the new peers section syntax using bind+server, but I remember that with the old syntax using "peer" you'd have to append "local" to the line to explicitly indicate it was the local server. Maybe something here is not working well enough to detect that condition and it prevents the stick-table from being properly transferred upon reload. Just a guess.
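
For reference, the command can be issued over the stats socket defined in the configuration above, e.g. with socat:

echo "show peers" | socat stdio /run/haproxy/admin.sock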

wtarreau avatar Feb 06 '21 10:02 wtarreau

Here is the result of "show peers" command:

7517> show peers
0x55937278e2b0: [08/Feb/2021:06:06:57] id=mypeers state=0 flags=0x3 resync_timeout=<PAST> task_calls=6
  0x559372790070: id=ip-10-0-128-235(local,inactive) addr=0.0.0.0:10000 last_status=NONE reconnect=<NEVER> confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 new_conn=0 proto_err=0
        flags=0x0
        shared tables:
          0x55937279e4f0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
              last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
              table:0x5593727977d0 id=tomcat update=4 localupdate=4 commitupdate=0 syncing=0
        TX dictionary cache:
        RX dictionary cache:
            0 -> -    1 -> -    2 -> -    3 -> -
            4 -> -    5 -> -    6 -> -    7 -> -
            8 -> -    9 -> -   10 -> -   11 -> -
           12 -> -   13 -> -   14 -> -   15 -> -
           16 -> -   17 -> -   18 -> -   19 -> -
           20 -> -   21 -> -   22 -> -   23 -> -
           24 -> -   25 -> -   26 -> -   27 -> -
           28 -> -   29 -> -   30 -> -   31 -> -
           32 -> -   33 -> -   34 -> -   35 -> -
           36 -> -   37 -> -   38 -> -   39 -> -
           40 -> -   41 -> -   42 -> -   43 -> -
           44 -> -   45 -> -   46 -> -   47 -> -
           48 -> -   49 -> -   50 -> -   51 -> -
           52 -> -   53 -> -   54 -> -   55 -> -
           56 -> -   57 -> -   58 -> -   59 -> -
           60 -> -   61 -> -   62 -> -   63 -> -
           64 -> -   65 -> -   66 -> -   67 -> -
           68 -> -   69 -> -   70 -> -   71 -> -
           72 -> -   73 -> -   74 -> -   75 -> -
           76 -> -   77 -> -   78 -> -   79 -> -
           80 -> -   81 -> -   82 -> -   83 -> -
           84 -> -   85 -> -   86 -> -   87 -> -
           88 -> -   89 -> -   90 -> -   91 -> -
           92 -> -   93 -> -   94 -> -   95 -> -
           96 -> -   97 -> -   98 -> -   99 -> -
          100 -> -  101 -> -  102 -> -  103 -> -
          104 -> -  105 -> -  106 -> -  107 -> -
          108 -> -  109 -> -  110 -> -  111 -> -
          112 -> -  113 -> -  114 -> -  115 -> -
          116 -> -  117 -> -  118 -> -  119 -> -
          120 -> -  121 -> -  122 -> -  123 -> -
          124 -> -  125 -> -  126 -> -  127 -> -

We also tested HAProxy with the old syntax, with the same negative result. As mentioned above, the stick-table was transferred to the new worker, but it disappeared on the old worker.

atitow avatar Feb 08 '21 11:02 atitow

An only slightly related question: why don't we gracefully close idle HTTP sessions on the old worker as soon as we get the signal? GOAWAY for HTTP/2, an SSL shutdown for H1 over SSL, and a half-close of the TCP connection for plaintext HTTP?

This would make reloads much less painful overall. Currently, we have to brutally kill a session with hard-stop-after or mworker-max-reloads, which can interrupt transactions in flight. And those could be long-running connections that only periodically see transactions, which - at least in theory - could have been easily closed.
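
For context, those two knobs are global settings, roughly like this (the values are only illustrative):

global
        hard-stop-after 30m       # force old processes to quit at most 30 minutes after a soft-stop/reload
        mworker-max-reloads 5     # in master-worker mode, terminate workers that survived more than 5 reloads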

lukastribus avatar Feb 08 '21 11:02 lukastribus

The difficulty is to wake all of them up, as we don't have such a list; the processing is too distributed. I would love to have this ability and have thought about it a few times already, without figuring out an acceptable solution.

wtarreau avatar Feb 24 '21 09:02 wtarreau

@atitow my understanding of your dump is that the peers are disconnected; maybe they didn't re-learn from the old process, since I'm not seeing remote IDs. But I'm not fluent in this, so I'm adding a peers tag so that someone who knows better can have a look.

wtarreau avatar Feb 24 '21 09:02 wtarreau

@wtarreau The new worker works perfectly fine after the reload: the stick-table is available and contains all records from the old worker. We have problems with the old worker after the synchronization. It looks like it loses the stick-table and is not able to forward requests to the proper backend server. We are forced to run with keep-alive mode disabled (option httpclose).

atitow avatar Feb 25 '21 10:02 atitow

OK, it's clearer now, thanks. I didn't notice your huge keep-alive timeouts. During a reload, the old processes are no longer connected to the other peers, because there is one connection per peer and this connection belongs to the new process only. So the only stickiness the old process will have is based on what is already in its table; it will not learn anything new from the other nodes.

As a general rule, a process is not expected to remain active for a long time after a reload, nor to perform much load balancing anymore (basically serve what's still in progress and close). For example, health checks are disabled during a soft-stop operation, and stats are usually not collected, so you generally don't want it to run for long.

What is the reason for using something as long as 5 minutes as the keep-alive timeout? It's more than some users' entire system lifetime (these should probably not be presented as the best example, but it illustrates the point). And do you think that using shorter timeouts would better address your issue?

wtarreau avatar Feb 25 '21 10:02 wtarreau

@wtarreau Thank you for the explanation. It is clear to us that the old peers will not learn from the new peers. The problem is that the old peers lose their existing (old) stick-table after the synchronization and cannot serve existing (old) connections properly. We just set the keep-alive timeout to 5 minutes to better illustrate and reproduce the problem. In real life we have shorter timeouts, but the problem is still there: the connections to the old peers are not served properly after the synchronization.

atitow avatar Feb 25 '21 11:02 atitow

"peers lose their existing (old) stick-table after the synchronization": this should not happen. The only reason for stick-table entries to disappear is if they've reached their end of life, or if the table is full and has to recycle its oldest entries. Given that your table's expiration is set to 4h and the keep-alive timeout to 5 minutes only, I hardly see how this could happen. Rest assured that this doesn't mean I don't believe you, just that I really can't imagine any single case which could produce this. @EmericBr do you know a situation where a table would vanish on an old process after synchronization?

By the way, I've thought about something which could work to tear down most idle connections on stopping. I need to think a bit more about it and maybe I'll be able to implement it for 2.4. The idea would be to have a per-thread "stopping list" to which muxes would subscribe once they turn idle, so they can be notified when it happens. A dedicated task would progressively run over the list and wake all entries up in case of stopping. I guess this would be sufficient.

I have a question regarding your config. You're using jsessionid in both the URL and a cookie. Do you still have an application relying on this, or is it just a legacy config? I'm asking because we dropped appsessions 1 or 2 years ago since nobody needed it anymore, given that such applications had disappeared and that all clients support cookies nowadays. So, if it's just an old config you inherited, you could simply set a cookie directive on each server with a unique name, and add cookie MYCOOKIE insert indirect nocache. This will be enough and you won't need your stick-table anymore. You could also reuse the JSESSIONID cookie using cookie JSESSIONID prefix (this mode was designed exactly for this case).
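
For illustration, a minimal sketch of that cookie-based setup applied to the backend from the original configuration (the per-server cookie values are just examples):

backend tomcat
        mode http
        balance roundrobin
        # reuse the application's JSESSIONID: haproxy prefixes its value with the server's cookie identifier
        cookie JSESSIONID prefix
        option httpchk GET /online/getStatus
        default-server inter 5s fall 2 rise 2 check
        server tomcat1 10.0.129.207:8080 cookie t1
        server tomcat2 10.0.130.174:8080 cookie t2
        server tomcat3 10.0.128.176:8080 cookie t3
        server tomcat4 10.0.130.11:8080 cookie t4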

wtarreau avatar Feb 25 '21 17:02 wtarreau

@wtarreau Thank you for the suggested workaround. Unfortunately, it would not work for us.

One of the main reasons for us to use HAProxy is its very flexible configuration for sticky session handling.

Normally we use cookies for session tracking, but some of our customers integrate our application for electronic signatures into their HTML pages using iframes. Cookies from applications running in iframes (third-party cookies) are blocked in Safari by default (https://www.theverge.com/2020/3/24/21192830/apple-safari-intelligent-tracking-privacy-full-third-party-cookie-blocking). So we have to use the session ID in the URL if cookies are not supported.

atitow avatar Feb 26 '21 15:02 atitow

OK, got it, thanks for the background. It makes sense in this case, indeed.

wtarreau avatar Feb 26 '21 17:02 wtarreau

@wtarreau I do not want to disturb you, but maybe you had a chance to think about tearing down the idle connections on stopping?

atitow avatar Apr 07 '21 10:04 atitow

Indeed, since 2.5 idle frontend connections are now actively closed by default. For some users at high loads this caused an issue with many clients disconnecting and reconnecting at the same time, so we've added an option to disable the mechanism, and in 2.6 we also have the ability to indicate over what period we want all older connections to be closed (this should satisfy everyone).

So I think that in recent versions you'll have the solution to your problem (2.6 is not released yet, but 2.5 is). Would you like to give it a try?
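
For reference (from memory, so please double-check the 2.6 docs), the closing period is configured with the global close-spread-time setting; if I recall correctly, the special value "infinite" disables the active closing entirely. A sketch with an illustrative value:

global
        close-spread-time 30s    # spread the closing of idle connections over a 30s window after a soft-stop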

wtarreau avatar Apr 14 '22 18:04 wtarreau

Thank you for the update. We will test the new version.

atitow avatar Apr 21 '22 17:04 atitow

Any news about this issue?

capflam avatar Aug 25 '22 10:08 capflam

Hello Christopher,

Unfortunately, I have not had time so far to re-execute the tests.

I will prioritize the task to complete the tests by the end of September.

Best regards, Alexey


atitow avatar Aug 26 '22 14:08 atitow

No problem. It was just a gentle ping to be sure the issue is still alive.

capflam avatar Aug 26 '22 15:08 capflam

Sorry, just a gentle ping? :)

capflam avatar Jan 27 '23 14:01 capflam

Hello Christopher,

Unfortunately, I still have not had time to re-execute the tests.

Could you give me time till the end of February, please?

Best regards, Alexey


atitow avatar Feb 01 '23 14:02 atitow

@atitow, no problem. As I said before, I just want to be sure the issue is not dead. I'm OK with keeping it open, of course, as long as necessary.

capflam avatar Feb 02 '23 08:02 capflam

Thanks a lot!


atitow avatar Feb 02 '23 08:02 atitow

It's me again :) any news?

capflam avatar May 30 '23 15:05 capflam

Hello Christopher!

Unfortunately we still have not tested the fix.

We are now migrating our AWS instances to Ubuntu 22.04 and will execute the tests as part of this migration.

Sorry again for the delay.

Best regards, Alexey Titov


atitow avatar Jun 01 '23 14:06 atitow

No problem. Thanks !

capflam avatar Jun 02 '23 07:06 capflam

I've applied the same configuration, but each time I make changes and reload the service, new stick-table entries are generated. I'd like to retain the existing entries until they expire. Is there a way to achieve this?

Prudhvi1357 avatar Feb 12 '24 19:02 Prudhvi1357

@Prudhvi1357 Could you please provide the output of the haproxy -vv currently in use and your current config? This issue is quite old and some up-to-date information is required.

git001 avatar Feb 12 '24 19:02 git001

HAProxy version 2.9.4-1ppa1~focal 2024/02/01 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.9.4.html
Running on: Linux 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:15:01 UTC 2024 x86_64
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-jpXSNL/haproxy-2.9.4=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 USE_SYSTEMD=1 USE_QUIC=1 USE_PROMEX=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_QUIC_OPENSSL_COMPAT=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_CAP +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_AWSLC -OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL +PROMEX -PTHREAD_EMULATION +QUIC +QUIC_OPENSSL_COMPAT +RT +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=4).
Built with OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
Running on OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.4.0

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
      quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
        h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
      fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
        h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
      none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter

Available filters :
        [BWLIM] bwlim-in
        [BWLIM] bwlim-out
        [CACHE] cache
        [COMP] compression
        [FCGI] fcgi-app
        [SPOE] spoe
        [TRACE] trace

Here is the result.

Prudhvi1357 avatar Feb 12 '24 19:02 Prudhvi1357

Here is my config as well:

global
        log /dev/log    local0
        log /dev/log    local1 notice
        maxconn 500000
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats socket /run/haproxy.sock mode 660 level admin
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option  redispatch
        timeout connect 5000
        timeout client  50000
        timeout server  50000

peers mypeers
        bind :10000
        server 127.0.0.1

#frontend socketio_frontend

bind *:80

frontend socketio_frontend_http
        bind *:80
        mode http
        redirect scheme https code 301 if !{ ssl_fc }

frontend socketio_frontend_https
        bind *:443 ssl crt /etc/ssl/certs/onpassive.pem
        mode http

        # Use ACLs to check for the roomID parameter in the URL
        #acl has_roomID urlp(roomID) -m found
        default_backend room_servers

backend room_servers
        #balance random         # Use the balance random directive for random server selection
        #balance roundrobin     # Use a different algorithm for random selection if needed
        balance url_param roomId
        stick-table type string len 256 size 200k expire 12h peers mypeers
        stick store-request url_param(roomId)
        stick match url_param(roomId)
        stick on cookie(roomId)
        #acl unique_roomId hdr_cnt(roomId) eq 0
        #http-request track-sc1 hdr(roomId)
        #http-request deny if unique_roomId
        stick on urlp(roomId)
        server i-06798925e549f047e 12.45.23.45:3443 check ssl verify none
        server i-0b296f438e31a6e74 13.125.34.56:3443 check ssl verify none

        # Add more backend servers as needed

listen stats
        # HAProxy stats listener (for monitoring)
        bind *:8080
        stats enable
        stats uri /stats
        stats realm HAProxy Statistics
        stats auth admin:admin123

Prudhvi1357 avatar Feb 12 '24 19:02 Prudhvi1357

Hi, any luck?

Prudhvi1357 avatar Feb 13 '24 02:02 Prudhvi1357

Sorry, but I'm confused. Is this a continuation of the previous issue or a different report?

wtarreau avatar Feb 13 '24 07:02 wtarreau