
Using proxy_protocol v2 with an h2c backend gives the wrong IP address to the backend.

CRCinAU opened this issue 1 year ago · 19 comments

I recently moved over to Caddy as a frontend for one of my sites.

Extract of the Caddyfile:

example.com {
        header Strict-Transport-Security "max-age=63072000"
        header -Server

        handle_path /forum/* {
                reverse_proxy http://<host2>:8000
        }

        reverse_proxy h2c://<docker_container_name>:80 {
                transport http {
                        proxy_protocol v2
                }
        }
}

When configured as above, after a random number of hits, the source IP addresses logged by the backend behind the reverse proxy will all be the same. This affects ANY host - IPv4 or IPv6.

Changing the backend to http:// as follows seems to report the source IP addresses correctly:

example.com {
        header Strict-Transport-Security "max-age=63072000"
        header -Server

        handle_path /forum/* {
                reverse_proxy http://<host2>:8000
        }

        reverse_proxy http://<docker_container_name>:80 {
                transport http {
                        proxy_protocol v2
                }
        }
}

Versions:

/srv # caddy --version
v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=

CRCinAU · May 26 '24 03:05

Ah, that makes sense. The connections to the backend are pooled, I think, so subsequent requests might appear to come from the same IP as the first request. I'm not sure if we have a way to turn off pooling in h2c mode right now.
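
To see why pooling breaks this: the PROXY protocol header is written once per TCP connection, at dial time, so any request that later reuses that connection inherits the first client's address. A minimal sketch, assuming github.com/pires/go-proxyproto (the library whose Header type appears in the debug logs later in this thread):

package sketch

import (
	"net"

	proxyproto "github.com/pires/go-proxyproto"
)

// dialWithProxyHeader dials the backend and writes a PROXY v2 header
// advertising clientAddr. The header is sent exactly once, so every
// request that is later pooled onto this connection appears to come
// from clientAddr, whoever actually sent it.
func dialWithProxyHeader(backend string, clientAddr net.Addr) (net.Conn, error) {
	conn, err := net.Dial("tcp", backend)
	if err != nil {
		return nil, err
	}
	header := proxyproto.HeaderProxyFromAddrs(2, clientAddr, conn.RemoteAddr())
	if _, err := header.WriteTo(conn); err != nil {
		conn.Close()
		return nil, err
	}
	return conn, nil
}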

francislavoie · May 26 '24 03:05

The ntlm-transport does pooling per remote IP address. I wonder if the mechanism can be copied into core for this use case. Now that I've said that, I realize the same logic probably covers both NTLM and proxy-protocol + h2c.

mohammed90 · May 26 '24 06:05

@mohammed90 I always thought we should pool connections per client IP if the PROXY protocol is enabled, instead of blindly disabling keep-alive. I tried to implement custom pooling but gave up. That package gives me some inspiration.
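
One way to read "pooling per remote IP": keep a separate connection pool per client IP, so a reused connection always carries a matching PROXY header. A rough sketch of the idea (hypothetical names; this is not the ntlm-transport or Caddy implementation):

package sketch

import (
	"net/http"
	"sync"
)

// perClientPools hands out a dedicated http.Transport (and therefore a
// dedicated connection pool) per client IP, so a pooled connection is
// only ever reused for requests from the same source address.
type perClientPools struct {
	mu       sync.Mutex
	pools    map[string]*http.Transport
	newForIP func(clientIP string) *http.Transport // builds a transport whose dialer writes the PROXY header for clientIP
}

func (p *perClientPools) transportFor(clientIP string) *http.Transport {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.pools == nil {
		p.pools = make(map[string]*http.Transport)
	}
	if t, ok := p.pools[clientIP]; ok {
		return t
	}
	t := p.newForIP(clientIP)
	p.pools[clientIP] = t
	return t
}

In practice the map would also need an eviction policy, since it grows with the number of distinct client IPs.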

WeidiDeng · May 27 '24 01:05

@CRCinAU Can you try xcaddy build h2c-proxy-protocol to see if it's fixed?

WeidiDeng · May 27 '24 06:05

I'm not really familiar enough with Caddy to pull this off - I've only ever used the Docker container from Docker Hub. Is there any way to bring this into the existing Docker container?

CRCinAU · May 27 '24 07:05

Run the following Dockerfile:

FROM caddy:2.7.6-builder AS builder

RUN xcaddy build h2c-proxy-protocol

FROM caddy:2.7.6

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

The resulting image contains this patch, and you can copy the binary out of it.
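
For example (caddy-h2c is just a placeholder tag):

docker build -t caddy-h2c .
docker create --name caddy-tmp caddy-h2c
docker cp caddy-tmp:/usr/bin/caddy ./caddy
docker rm caddy-tmp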

WeidiDeng · May 27 '24 07:05

I tried running this with h2c:// - but Caddy just seemed to hang when talking to the backend... Nothing seemed to make it through to the client.

CRCinAU · May 27 '24 07:05

Any logs available? Please enable debug in the global options.
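
That is, at the top of the Caddyfile:

{
	debug
}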

WeidiDeng · May 27 '24 07:05

I'm giving this a go now... I tried it on my main web site - but as it died, I just rolled back straight away.

Here are the logs I see from just random internet traffic hitting the site when using h2c:// to the backend:

{"level":"debug","ts":1716813182.8682232,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"httpd:80","total_upstreams":1}
{"level":"debug","ts":1716813182.868803,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813182.869208,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813183.8701084,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813185.8720198,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813189.8733985,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813190.4139547,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"83.229.76.239","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813197.8786793,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813206.4157789,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"83.229.76.239","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}
{"level":"debug","ts":1716813213.882467,"logger":"http.reverse_proxy.transport.http","msg":"sending proxy protocol header v2","header":{"Version":2,"Command":33,"TransportProtocol":17,"SourceAddr":{"IP":"45.87.9.222","Port":0,"Zone":""},"DestinationAddr":{"IP":"0.0.0.0","Port":0,"Zone":""}}}

CRCinAU · May 27 '24 12:05

You need to configure it like this (see https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#the-http-transport):

	reverse_proxy h2c://<docker_container_name>:80 {
		transport http {
			keepalive off
			proxy_protocol v2
		}
	}

francislavoie · May 27 '24 15:05

You need to configure it like this (see https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#the-http-transport):

	reverse_proxy h2c://<docker_container_name>:80 {
		transport http {
			keepalive off
			proxy_protocol v2
		}
	}

This does actually seem to work - i.e. the connection doesn't hang - on the caddy:latest image, which seems to be:

docker exec -ti caddy /bin/sh
/srv # caddy --version
v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=
/srv # 

However, even in this configuration, the wrong remote IP address is logged by Apache - which is what started this bug report.
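
(Apache can be configured to read the PROXY header via mod_remoteip with the directive below; the exact backend config isn't shown in this thread.)

RemoteIPProxyProtocol On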

Trying this again with the instructions above (https://github.com/caddyserver/caddy/issues/6342#issuecomment-2132810869), the connection still hangs.

Eventually, I get a 502 timeout from the backend:

{
  "level": "error",
  "ts": 1716824790.699319,
  "logger": "http.log.error",
  "msg": "http2: client conn not usable",
  "request": {
    "remote_ip": "<my ipv6 address>",
    "remote_port": "51640",
    "client_ip": "<my ipv6 address>",
    "proto": "HTTP/3.0",
    "method": "GET",
    "host": "<fqdn>",
    "uri": "/",
    "headers": {
      "Sec-Fetch-Dest": [
        "document"
      ],
      "Accept-Language": [
        "en-GB,en;q=0.6"
      ],
      "Sec-Fetch-Mode": [
        "navigate"
      ],
      "Sec-Fetch-User": [
        "?1"
      ],
      "Sec-Ch-Ua-Platform": [
        "\"Linux\""
      ],
      "Upgrade-Insecure-Requests": [
        "1"
      ],
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8"
      ],
      "Sec-Gpc": [
        "1"
      ],
      "Sec-Fetch-Site": [
        "same-origin"
      ],
      "Referer": [
        "https://<fqdn>"
      ],
      "Accept-Encoding": [
        "gzip, deflate, br, zstd"
      ],
      "Sec-Ch-Ua": [
        "\"Brave\";v=\"125\", \"Chromium\";v=\"125\", \"Not.A/Brand\";v=\"24\""
      ],
      "Sec-Ch-Ua-Mobile": [
        "?0"
      ],
      "Cookie": [
        "REDACTED"
      ],
      "Priority": [
        "u=0, i"
      ],
      "Cache-Control": [
        "max-age=0"
      ],
      "User-Agent": [
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4867,
      "proto": "h3",
      "server_name": "<fqdn>"
    }
  },
  "duration": 64.009204212,
  "status": 502,
  "err_id": "07ygq06wq",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:1269)"
}

CRCinAU · May 27 '24 15:05

You need to configure it like this (see https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#the-http-transport):

	reverse_proxy h2c://<docker_container_name>:80 {
		transport http {
			keepalive off
			proxy_protocol v2
		}
	}

@francislavoie keepalive is disabled if proxy_protocol is in use.

WeidiDeng · May 28 '24 00:05

Ah, I see yeah

https://github.com/caddyserver/caddy/blob/77394f2f66195771d7437ff54c6ddd1a37cf2a90/modules/caddyhttp/reverseproxy/httptransport.go#L342-L348
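
In spirit, that linked block amounts to this (a paraphrase, not the verbatim Caddy source):

// The PROXY protocol header is per-connection, so a connection must not
// be shared between clients; keep-alives are forced off when it's enabled.
if proxyProtocol != "" {
	transport.DisableKeepAlives = true
}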

So just need to use the new build to test I guess.

francislavoie · May 28 '24 00:05

I think this is a stdlib issue - a single-use connection (keep-alive disabled) is only considered usable while its next stream ID is still the initial 1:

https://github.com/golang/net/blob/022530c41555839e27aec3868cc480fb7b5e33d4/http2/transport.go#L1028

However, h2c requests start with a stream ID of 3:

https://github.com/golang/net/blob/022530c41555839e27aec3868cc480fb7b5e33d4/http2/transport.go#L836

So requests never get sent in this case.
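
A simplified illustration of the mismatch (not the verbatim x/net code):

// A single-use (keep-alive disabled) connection is only considered usable
// while no stream has been opened yet, i.e. while the next stream ID is
// still the initial 1. An h2c connection that starts at 3 fails this
// check before its first request is ever sent.
func canTakeNewRequest(singleUse bool, nextStreamID uint32) bool {
	if singleUse && nextStreamID > 1 {
		return false
	}
	return true
}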

WeidiDeng · May 28 '24 01:05

@CRCinAU Can you try with xcaddy build h2c-proxy-protocol --replace golang.org/x/net=github.com/WeidiDeng/net@h2c-disable-keepalive? You'll need to update the caddy image version to 2.8.0, since older versions can't build with this method.

WeidiDeng · May 30 '24 02:05

@WeidiDeng - Sorry, I'm not the best with Caddy or its build process - I tried the following modified Dockerfile based on the above:

FROM caddy:2.8.0-builder AS builder

RUN xcaddy build h2c-proxy-protocol --replace golang.org/x/net=github.com/WeidiDeng/net@h2c-disable-keepalive

FROM caddy:2.8.0

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

However, it complains that there isn't a tag for caddy:2.8.0 or caddy:2.8.0-builder:

ERROR: failed to solve: caddy:2.8.0-builder: failed to resolve source metadata for docker.io/library/caddy:2.8.0-builder: no match for platform in manifest: not found

Looking at the tags on Docker Hub, I can see that there is a builder and latest tag updated 2 hours ago, and they do have linux/amd64 images listed. Would it be ok to use :builder and :latest for this test?

EDIT: I noticed there are images for caddy:2.8-builder and caddy:2.8 - which I tried, but that errored out with:

 => ERROR [builder 2/2] RUN xcaddy build h2c-proxy-protocol --replace golang.org/x/net=github.com/WeidiDeng/net@h2c-disable-keepalive
[ERROR] missing flag; caddy version already set at h2c-proxy-protocol

CRCinAU · May 30 '24 03:05

That builder has the wrong xcaddy version, 0.4.1 instead of the latest 0.4.2. It'll be a while before the Docker image is ready.

WeidiDeng · May 30 '24 03:05

@CRCinAU 2.8.0-builder is ready now, you can try again.

WeidiDeng · May 30 '24 04:05

It built OK now, and I can confirm I'm seeing HTTP/2.0 requests to the backend, with the correct IP address reported to the backend via the PROXY protocol.

CRCinAU · May 30 '24 04:05