possible bug with 1xx status code handling
Reference: https://caddy.community/t/caddy-2-6-x-stop-webdav-client-to-finish-uploading/17975
"It’s about client receiving 100 status twice. Once from stdlib, another from reverse proxy."
If someone manages to add a failing test, I can take a look!
Where should a failing test be added?
@helmut72 Do you have a really simple reproducer? Ideally just a couple of lines of Caddyfile. That'd be a good start.
Caddyfile(s) of my complete configuration for this specific site, which serves a webdav service:
{
	servers :443
	servers :8448
	#debug
	admin :2019
}
import snippets/*.conf
import sites/*.conf
snippets/wildcard.conf
(wildcard) {
	tls /letsencrypt/certificates/_.example.com.crt /letsencrypt/certificates/_.example.com.key
}
snippets/client_headers.conf
(client_headers) {
	header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
	header -Server
}
snippets/backend_headers.conf
(backend_headers) {
	header_up X-Real-IP {remote_host}
	header_up X-Forwarded-Port {args.0}
}
sites/webdav.conf
webdav.example.com:443 {
	import wildcard
	import client_headers
	reverse_proxy http://192.168.1.3:6061 {
		import backend_headers 443
	}
}
I don't use Caddy for certificates because I also need them for IMAP, SMTP, FTPS and LDAP, my provider doesn't have a DNS API, and my own script around acme-lego works (carried over from Apache).
A WebDAV server listens on 192.168.1.3:6061:
compose:
services:
  sftpgo:
    container_name: webdav
    image: drakkan/sftpgo:v2-alpine-slim
    restart: always
    ports:
      - 5041:5041
      - 6061:6061
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /docker/data/sync/backup:/backup
      - /docker/data/sync/data:/var/lib/sftpgo
      - /docker/files/webdav.example.com:/files
    env_file:
      - sftpgo.env
sftpgo.env
TZ=Europe/Berlin
# Common
SFTPGO_COMMON__IDLE_TIMEOUT=15
SFTPGO_COMMON__UPLOAD_MODE=2
SFTPGO_COMMON__TEMP_PATH=/files/.temp
# Data Provider
SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=/files
SFTPGO_DATA_PROVIDER__CREATE_DEFAULT_ADMIN=1
# Default
SFTPGO_DEFAULT_ADMIN_USERNAME=admin
SFTPGO_DEFAULT_ADMIN_PASSWORD=password
# HTTPD/Webinterface
SFTPGO_HTTPD__BACKUPS_PATH=/backup
SFTPGO_HTTPD__BINDINGS__0__ADDRESS=0.0.0.0
SFTPGO_HTTPD__BINDINGS__0__PORT=5041
SFTPGO_HTTPD__BINDINGS__0__HIDE_LOGIN_URL=3
SFTPGO_HTTPD__BINDINGS__0__ENABLE_WEB_CLIENT=0
SFTPGO_HTTPD__BINDINGS__0__PROXY_ALLOWED=0.0.0.0/0
SFTPGO_HTTPD__BINDINGS__0__CLIENT_IP_PROXY_HEADER="X-Forwarded-For"
# Defender
SFTPGO_COMMON__DEFENDER__ENABLED=1
SFTPGO_COMMON__DEFENDER__BAN_TIME=5
SFTPGO_COMMON__DEFENDER__BAN_TIME_INCREMENT=10
SFTPGO_COMMON__DEFENDER__THRESHOLD=5
# Log
SFTPGO_LOG_FILE_PATH=
#SFTPGO_LOG_VERBOSE=1
SFTPGO_LOG_VERBOSE=0
# --- Protocols ---
# WebDAV
SFTPGO_WEBDAVD__BINDINGS__0__PORT=6061
SFTPGO_WEBDAVD__BINDINGS__0__PROXY_ALLOWED=0.0.0.0/0
SFTPGO_WEBDAVD__BINDINGS__0__CLIENT_IP_PROXY_HEADER="X-Forwarded-For"
That's more complex than I think Matt was hoping for in a reproducer. There shouldn't be any special TLS stuff, only one Caddyfile (no imports), and ideally no Docker involved.
Here is a reproducible example with rclone (both SFTPGo and rclone use golang.org/x/net/webdav for WebDAV handling):
rclone version 1.16
caddy version 2.6.2
- Run rclone
rclone serve webdav .
- Run caddy with Caddyfile
example.com {
reverse_proxy http://127.0.0.1:8080
}
- Run the following Go code (a fragment: to build it, wrap it in package main/func main and import crypto/tls, io, net/http, and os)
config := &tls.Config{ServerName: "example.com"}
file, _ := os.Open("/example/path")
conn, _ := tls.Dial("tcp", "example.com:443", config)
req, _ := http.NewRequest(http.MethodPut, "https://example.com/example", file)
req.Header.Set("Expect", "100-continue")
_ = req.Write(conn)
_, _ = io.Copy(os.Stdout, conn)
_ = conn.Close()
The output is the following:
HTTP/1.1 100 Continue
HTTP/1.1 100 Continue
Alt-Svc: h3=":443"; ma=2592000
Server: Caddy
HTTP/1.1 201 Created
Alt-Svc: h3=":443"; ma=2592000
Content-Length: 7
Content-Type: text/plain; charset=utf-8
Date: Sat, 07 Jan 2023 12:27:06 GMT
Etag: "173806397e1ebc4da2c2506"
Server: Caddy
Created
Using the Go source tarball (25 MB) as the upload file, the program hangs for 5 minutes before the 201 Created response and its trailing CRLF show up. Interestingly, 5 minutes is Caddy's default idle timeout, so there is definitely something wrong. The problem may not be 1xx handling, but rather that Caddy does not flush the response body until the last moment.
The relevant log on the server is
{"level":"info","ts":1673247409.723655,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"1.1.1.1","remote_port":"0","proto":"HTTP/1.1","method":"PUT","host":"example.com","uri":"/go.tar.gz","headers":{"User-Agent":["Go-http-client/1.1"],"Expect":["100-continue"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"","server_name":"example.com"}},"user_id":"","duration":1.852408477,"size":7,"status":201,"resp_headers":{"Server":["Caddy"],"Alt-Svc":["h3=\":443\"; ma=2592000"],"Etag":["\"173891a714fd57bc18f0d0c\""],"Date":["Mon, 09 Jan 2023 06:56:49 GMT"],"Content-Length":["7"],"Content-Type":["text/plain; charset=utf-8"]}}
A previous upload file was 134 MB, and the response was received the instant it finished uploading.
There shouldn't be any special TLS stuff, only one Caddyfile (no imports), and ideally with no Docker involved.
Completely without TLS, one Caddyfile and no Docker:
- Get Caddy 2.6.2 binary from https://github.com/caddyserver/caddy/releases
- Start Caddy with
./caddy run --config ./Caddyfile --adapter caddyfile
caddyfile
{
	auto_https off
	servers :80
	debug
}

:80 {
	log
	reverse_proxy http://localhost:6080
}
webdav server
- Get latest SFTPgo binary from https://github.com/drakkan/sftpgo/releases
- Start WebDAV on the same server with
./sftpgo portable --webdav-port 6080 --username user --password user --permissions "*" --directory /tmp
client
- Install the Banking4 trial, create a data file, and set up WebDAV pointing to http://<ip_of_caddy> with username and password "user".
(See https://subsembly.com/buybanking4.html and https://caddy.community/t/caddy-2-6-x-stop-webdav-client-to-finish-uploading/17975/9?u=helmut72)
result
Upload hangs. Logs (client 192.168.1.178, server 192.168.1.177):
2023/01/09 08:08:13.610 DEBUG http.handlers.reverse_proxy selected upstream {"dial": "localhost:6080", "total_upstreams": 1}
2023/01/09 08:08:18.063 DEBUG http.handlers.reverse_proxy upstream roundtrip {"upstream": "localhost:6080", "duration": 4.452984914, "request": {"remote_ip": "192.168.1.178", "remote_port": "49812", "proto": "HTTP/1.1", "method": "PUT", "host": "192.168.1.177", "uri": "/Meine%20Banken.sub", "headers": {"Authorization": [], "Content-Type": ["application/octet-stream"], "Content-Length": ["3559936"], "X-Forwarded-For": ["192.168.1.178"], "X-Forwarded-Proto": ["http"], "X-Forwarded-Host": ["192.168.1.177"], "User-Agent": ["Mozilla/5.0 (Subsembly.Interweb)"], "Expect": ["100-continue"]}}, "headers": {"Etag": ["\"1738958ddad9b10f365200\""], "Date": ["Mon, 09 Jan 2023 08:08:18 GMT"], "Content-Length": ["7"], "Content-Type": ["text/plain; charset=utf-8"]}, "status": 201}
2023/01/09 08:08:18.064 INFO http.log.access handled request {"request": {"remote_ip": "192.168.1.178", "remote_port": "49812", "proto": "HTTP/1.1", "method": "PUT", "host": "192.168.1.177", "uri": "/Meine%20Banken.sub", "headers": {"Content-Length": ["3559936"], "User-Agent": ["Mozilla/5.0 (Subsembly.Interweb)"], "Expect": ["100-continue"], "Authorization": [], "Content-Type": ["application/octet-stream"]}}, "user_id": "", "duration": 4.453865043, "size": 7, "status": 201, "resp_headers": {"Server": ["Caddy"], "Etag": ["\"1738958ddad9b10f365200\""], "Date": ["Mon, 09 Jan 2023 08:08:18 GMT"], "Content-Length": ["7"], "Content-Type": ["text/plain; charset=utf-8"]}}
Same test with Caddy 2.5.2 binary. Result: Upload works. Logs (client 192.168.1.178, server 192.168.1.177):
2023/01/09 08:12:23.831 DEBUG http.handlers.reverse_proxy selected upstream {"dial": "localhost:6080", "total_upstreams": 1}
2023/01/09 08:12:24.258 DEBUG http.handlers.reverse_proxy upstream roundtrip {"upstream": "localhost:6080", "duration": 0.426950971, "request": {"remote_ip": "192.168.1.178", "remote_port": "49823", "proto": "HTTP/1.1", "method": "PUT", "host": "192.168.1.177", "uri": "/Meine%20Banken.sub", "headers": {"X-Forwarded-Host": ["192.168.1.177"], "User-Agent": ["Mozilla/5.0 (Subsembly.Interweb)"], "Expect": ["100-continue"], "Authorization": [], "Content-Type": ["application/octet-stream"], "Content-Length": ["3559936"], "X-Forwarded-For": ["192.168.1.178"], "X-Forwarded-Proto": ["http"]}}, "headers": {"Etag": ["\"173895c72cc72901365200\""], "Date": ["Mon, 09 Jan 2023 08:12:24 GMT"], "Content-Length": ["7"], "Content-Type": ["text/plain; charset=utf-8"]}, "status": 201}
2023/01/09 08:12:24.259 INFO http.log.access handled request {"request": {"remote_ip": "192.168.1.178", "remote_port": "49823", "proto": "HTTP/1.1", "method": "PUT", "host": "192.168.1.177", "uri": "/Meine%20Banken.sub", "headers": {"Authorization": [], "Content-Type": ["application/octet-stream"], "Content-Length": ["3559936"], "User-Agent": ["Mozilla/5.0 (Subsembly.Interweb)"], "Expect": ["100-continue"]}}, "user_id": "", "duration": 0.428541222, "size": 7, "status": 201, "resp_headers": {"Etag": ["\"173895c72cc72901365200\""], "Date": ["Mon, 09 Jan 2023 08:12:24 GMT"], "Content-Length": ["7"], "Content-Type": ["text/plain; charset=utf-8"], "Server": ["Caddy"]}}
With Go 1.20rc2, where the 1xx-handling commits are merged, this can be reproduced with the stdlib reverse proxy.
Replace the "Run caddy" step with the stdlib reverse proxy below, and use the plain-TCP client that follows it (a combined, runnable version of both fragments is included after them):
target, _ := url.Parse("http://localhost:8080")
proxy := httputil.NewSingleHostReverseProxy(target)
panic(http.ListenAndServe(":80", proxy))
conn, _ := net.Dial("tcp", "127.0.0.1:80")
file, _ := os.Open("/tmp/go1.20rc2.src.tar.gz")
req, _ := http.NewRequest(http.MethodPut, "http://127.0.0.1/go.tar.gz", file)
req.Header.Set("Expect", "100-continue")
_ = req.Write(conn)
_, _ = io.Copy(os.Stdout, conn)
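For convenience, here are the two fragments above combined into one self-contained program (an editorial sketch: the rclone WebDAV server from the earlier steps is assumed on localhost:8080, and the upload file path is just the example used above):

package main

import (
	"io"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
)

func main() {
	// Stand-in for Caddy: the stdlib reverse proxy listening on :80.
	target, _ := url.Parse("http://localhost:8080")
	proxy := httputil.NewSingleHostReverseProxy(target)
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		panic(err)
	}
	go func() { panic(http.Serve(ln, proxy)) }()

	// Raw client: send a PUT with Expect: 100-continue and dump every byte
	// the proxy writes back, so duplicate interim responses become visible.
	conn, err := net.Dial("tcp", "127.0.0.1:80")
	if err != nil {
		panic(err)
	}
	file, err := os.Open("/tmp/go1.20rc2.src.tar.gz") // any large file works
	if err != nil {
		panic(err)
	}
	req, _ := http.NewRequest(http.MethodPut, "http://127.0.0.1/go.tar.gz", file)
	req.Header.Set("Expect", "100-continue")
	_ = req.Write(conn)              // writes headers and body straight to the socket
	_, _ = io.Copy(os.Stdout, conn)  // prints the responses; stop with Ctrl-C
}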
I believe it's a bug with stdlib not flushing.
I have a minimum reproducible example with https://github.com/golang/go/issues/57696.
@WeidiDeng Wow, excellent detective work. Thank you for working on this!
@WeidiDeng Thank you for your support! I would never be able to find this out.
Is there any news or workaround for this topic? I am running into the same issue.
I don't want to use Caddy 2.5.2 forever, even if it's just a private/home instance. Of course I had the same problem with Traefik, but it's worth noting that the problem only started with a later Go compiler version.
My workaround is to use HAProxy for now, but I'd like to switch back to Caddy, because HAProxy's configuration feels like it's from a different age and isn't as flexible as Caddy's.
As the author of this feature in both Go and Caddy, I'll try to fix this bug, but no ETA :/
@helmut72 @Juliku Can you try adding
header_up -Expect
inside the reverse_proxy block as a workaround for now? Just as you would for passing extra headers, but this removes that specific header.
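For illustration, a rough stdlib analogue of that workaround (a sketch, not Caddy's implementation): dropping Expect from the upstream request means the backend never sends its own 100 Continue, so the client should only see the interim response generated by the front server.

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Backend address is only an example.
	target, _ := url.Parse("http://localhost:8080")
	proxy := httputil.NewSingleHostReverseProxy(target)

	// Analogue of Caddy's `header_up -Expect`: remove the header before the
	// request is forwarded upstream.
	director := proxy.Director
	proxy.Director = func(r *http.Request) {
		director(r)
		r.Header.Del("Expect")
	}

	panic(http.ListenAndServe(":80", proxy))
}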
@dunglas Double 100 continue can be reproduced 100% locally:
// Requires imports: context, io, net/http, net/http/httptrace, net/http/httputil,
// net/textproto, net/url, os, strings, testing, time.
type delayWriter struct {
	http.ResponseWriter
}

func (w *delayWriter) WriteHeader(status int) {
	time.Sleep(time.Second)
	w.ResponseWriter.WriteHeader(status)
}

func (w *delayWriter) Write(b []byte) (int, error) {
	time.Sleep(time.Second)
	return w.ResponseWriter.Write(b)
}

func TestReverse(t *testing.T) {
	go http.ListenAndServe("127.0.0.1:1234", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_, _ = io.Copy(io.Discard, r.Body)
		(&delayWriter{w}).WriteHeader(http.StatusNoContent)
	}))
	u, _ := url.Parse("http://127.0.0.1:1234")
	rp := httputil.NewSingleHostReverseProxy(u)
	go http.ListenAndServe("127.0.0.1:1235", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rp.ServeHTTP(&delayWriter{w}, r)
	}))
	time.Sleep(time.Second)
	ctx := httptrace.WithClientTrace(context.Background(), &httptrace.ClientTrace{
		Got100Continue: func() {
			t.Log("100 received")
		},
		Got1xxResponse: func(code int, header textproto.MIMEHeader) error {
			t.Log(code)
			return nil
		},
	})
	req, _ := http.NewRequestWithContext(ctx, "GET", "http://127.0.0.1:1235/", strings.NewReader("55522"))
	req.Header.Set("Expect", "100-Continue")
	resp, _ := http.DefaultClient.Do(req)
	_ = resp.Write(os.Stderr)
}
But I don't think this is necessarily a bug; RFC 9110 says the client must be able to handle multiple 1xx responses.
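For reference, the reproduction above can be turned into the failing test that was asked for at the top of the thread: count the interim responses the client sees and expect exactly one. This is only a sketch against the standard library (the httptest servers and the assertion are editorial, not from the thread); on Go versions where the stdlib reverse proxy forwards the backend's 100 in addition to the server's automatic one, the count comes out as 2 and the test fails.

package main // place this in a _test.go file; the package name is arbitrary

import (
	"context"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httptrace"
	"net/http/httputil"
	"net/textproto"
	"net/url"
	"strings"
	"testing"
)

func TestSingle100Continue(t *testing.T) {
	// Backend: reading the body makes its server emit the automatic 100 Continue.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_, _ = io.Copy(io.Discard, r.Body)
		w.WriteHeader(http.StatusNoContent)
	}))
	defer backend.Close()

	// Frontend: the stdlib reverse proxy in front of the backend.
	target, _ := url.Parse(backend.URL)
	front := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer front.Close()

	// Count every interim response the client receives. The hook runs before
	// Do returns, so reading count afterwards is fine.
	var count int
	ctx := httptrace.WithClientTrace(context.Background(), &httptrace.ClientTrace{
		Got1xxResponse: func(code int, _ textproto.MIMEHeader) error {
			count++
			return nil
		},
	})

	req, err := http.NewRequestWithContext(ctx, http.MethodPut, front.URL, strings.NewReader("payload"))
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("Expect", "100-continue")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	resp.Body.Close()

	if count != 1 {
		t.Errorf("client saw %d interim responses, want 1", count)
	}
}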
Thanks a lot, "-Expect" worked like a charm.
But I don't think this is necessarily a bug; RFC 9110 says the client must be able to handle multiple 1xx responses.
According to Daniel Stenberg (the curl developer), it is a bug: https://stackoverflow.com/a/22830120
This is also my understanding of the RFC's wording:
The server intends to send a final response after the request has been fully received and acted upon.
Either way, is there some way for an HTTP client to work around this? Caddy is a widely used proxy, and getting a third-party operator to change their config is... optimistic.
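One client-side angle, if the client itself is under your control (a minimal sketch; the URL and file name are placeholders): skip Expect: 100-continue entirely, so no interim responses are involved at all. For what it's worth, Go's net/http client already tolerates duplicate 100s, since it skips interim responses before delivering the final one.

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholder file and URL for the upload.
	f, err := os.Open("backup.bin")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A plain PUT without the Expect header: the body is streamed right away
	// and no 100 Continue round trip is involved.
	req, err := http.NewRequest(http.MethodPut, "http://proxy.example/upload", f)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}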
@Jannik2099 That's an answer from 2014 on Stack Overflow, before it became commonplace to use the status this way, I think. We need a more recent opinion.