remote_ip request matcher not working (sometimes)
Hi, I have encountered some weird behavior in our use case. We're using Caddy 2 as a reverse proxy behind a Google Cloud TCP load balancer. It is the official Docker Hub image, hosted on Kubernetes as a single-replica deployment. However, the remote_ip request matcher sometimes does not work, as shown in the log entries below. By making requests to Caddy's config API, I confirmed that the JSON config returned in the response corresponds to the Caddyfile provided and does not change between requests. How do I proceed with debugging this? I've tried the "debug" global option, but unfortunately it seems that only the requests that do not fail are logged at "level": "debug".
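For reference, enabling that option just means adding debug to the Caddyfile's global options block, roughly like this (a minimal sketch based on the global block from the config shown below):

{
    grace_period 20s
    debug
}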
Version
Both the 2.5.0 and 2.4.6 Docker images seem to be equally affected.
Config
Caddy is configured via a Caddyfile that looks like the following snippet:
{
    grace_period 20s
}

redacted.abc.net {
    log {
        output stdout
    }
    reverse_proxy apache-redacted:8088
}

redacted2.abc.net:80 {
    log {
        output stdout
    }
    @ipfilter {
        remote_ip 10.110.0.0/16 66.66.66.66 77.77.77.77
    }
    handle @ipfilter {
        reverse_proxy /* redacted2:80
    }
    respond "403 Forbidden" 403
}

.....
# a couple dozen more sites served in a similar fashion; some are plain reverse_proxy, some have basic auth, some have IP filtering, and some have custom certs.
Observed (unexpected) behavior
As you can see from the log entry, I'm making a GET request to the homepage of redacted2.abc.net:80 from an IP within the @ipfilter ranges, and I'm getting status 0 (which apparently translates to HTTP status 200), zero content length, and no response headers from the backend. The behavior is the same and consistent for clients whose IPs fall outside the @ipfilter ranges.
{
  "level": "info",
  "ts": 1651784040.1112902,
  "logger": "http.log.access.log106",
  "msg": "handled request",
  "request": {
    "remote_ip": "77.77.77.77",
    "remote_port": "32958",
    "proto": "HTTP/1.1",
    "method": "GET",
    "host": "redacted2.abc.net",
    "uri": "/",
    "headers": {
      "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"],
      "Accept-Encoding": ["gzip, deflate"],
      "Connection": ["keep-alive"],
      "Upgrade-Insecure-Requests": ["1"],
      "User-Agent": ["Mozilla/5.0 (X11; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0"],
      "Accept-Language": ["cs,en;q=0.7,en-US;q=0.3"],
      "Sec-Gpc": ["1"],
      "Cache-Control": ["max-age=0"]
    }
  },
  "user_id": "",
  "duration": 0.00004606,
  "size": 0,
  "status": 0,
  "resp_headers": {
    "Server": ["Caddy"]
  }
}
Observed (expected) behavior
This time, the request was handled properly: the response was 200 and there is a bunch of headers from the backend. The client IP is still the same, using the same browser, Caddy process, and config, just a few moments apart.
{
  "level": "info",
  "ts": 1651784042.5086,
  "logger": "http.log.access.log106",
  "msg": "handled request",
  "request": {
    "remote_ip": "77.77.77.77",
    "remote_port": "32960",
    "proto": "HTTP/1.1",
    "method": "GET",
    "host": "redacted2.abc.net",
    "uri": "/",
    "headers": {
      "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"],
      "Accept-Encoding": ["gzip, deflate"],
      "Connection": ["keep-alive"],
      "Upgrade-Insecure-Requests": ["1"],
      "User-Agent": ["Mozilla/5.0 (X11; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0"],
      "Accept-Language": ["cs,en;q=0.7,en-US;q=0.3"],
      "Sec-Gpc": ["1"],
      "Cache-Control": ["max-age=0"]
    }
  },
  "user_id": "",
  "duration": 0.082626527,
  "size": 6074,
  "status": 200,
  "resp_headers": {
    "X-Frame-Options": ["SAMEORIGIN", "SAMEORIGIN"],
    "Vary": ["Cookie,Accept-Encoding"],
    "Feature-Policy": ["geolocation none;midi none;notifications none;push none;sync-xhr none;microphone none;camera none;magnetometer none;gyroscope none;speaker self;vibrate none;fullscreen self;payment none;"],
    "X-Xss-Protection": ["1; mode=block"],
    "X-CMS-Cache": ["HIT"],
    "Permissions-Policy": ["interest-cohort=()"],
    "Date": ["Thu, 05 May 2022 20:54:02 GMT"],
    "Etag": ["\"1651783290-1\""],
    "Server": ["Caddy", "Apache/2.4.25 (Debian)"],
    "Content-Type": ["text/html; charset=utf-8"],
    "X-Generator": ["..."],
    "Referrer-Policy": ["strict-origin"],
    "Cache-Control": ["public, max-age=600"],
    "Strict-Transport-Security": ["max-age=31536000; includeSubDomains"],
    "Content-Length": ["6074"],
    "Link": ["<http://redacted2.abc.net/>; rel=\"canonical\",<http://redacted2.abc.net/>; rel=\"shortlink\""],
    "Last-Modified": ["Thu, 05 May 2022 20:41:30 GMT"],
    "X-Content-Type-Options": ["nosniff"],
    "Expires": ["Sun, 19 Nov 1978 05:00:00 GMT"],
    "Content-Encoding": ["gzip"],
    "Content-Language": ["cs"]
  }
}
Can you craft a minimally reproducible example? You have a bunch of redactions which make it hard to confirm what's going on.
Moreover, since this impacts remote_ip, please do not redact the field in the log line. We need to know the structure of the IP address with which it fails.
Can you craft a minimally reproducible example? You have a bunch of redactions which make it hard to confirm what's going on.
I'll look into that.
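A stripped-down single-site config along these lines could serve as a starting point for that (a sketch only; the hostname, upstream address, and IP range here are placeholders rather than values from the real setup):

example.abc.net:80 {
    @ipfilter {
        remote_ip 10.0.0.0/16
    }
    handle @ipfilter {
        reverse_proxy backend:80
    }
    respond "403 Forbidden" 403
}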
Moreover, since this impacts remote_ip, please do not redact the field in the log line. We need to know the structure of the IP address with which it fails.
The IP used in the topic was 185.24.21.118, but plenty more are affected, if not all.
And what's in your config file?
Closing due to inactivity