
🐛 cloudflared drops HTTP request body when Content-Length is missing (e.g. Docker blob upload PATCH)

Open tarasov65536 opened this issue 7 months ago

Describe the bug

When using cloudflared as a tunnel proxy, HTTP requests without a Content-Length header but with a valid streaming body (e.g. Docker Registry PATCH uploads) are forwarded with Content-Length: 0, and the body is dropped entirely.

This breaks standard use cases like docker push, where the Docker client sends a PATCH request to upload a blob chunk without specifying the Content-Length (streamed body).

To Reproduce

Steps to reproduce the behavior:

  1. Run a Docker Registry (or Gitea with Registry enabled) behind Caddy and Cloudflared.
  2. Connect a Docker client and attempt docker push via the Cloudflared tunnel.
  3. Observe that PATCH requests arrive with Content-Length: 0 and an empty body.
  4. Run the same setup without cloudflared and observe proper streaming (the Content-Length header is absent, the body is received, and Caddy reports bytes_read > 0).

Expected behavior

Cloudflared should forward the streaming body as-is, even when Content-Length is missing. Many clients (including Docker) rely on this per the HTTP/1.1 spec, either by streaming the body and closing the connection or by using chunked transfer encoding internally.
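For reference, the chunked variant of such a request looks roughly like this on the wire (an illustrative HTTP/1.1 sketch, not a capture of real Docker client traffic; the path is a placeholder and chunk sizes are hexadecimal byte counts):

PATCH /v2/<name>/blobs/uploads/<uuid> HTTP/1.1
Host: registry.example.com
Content-Type: application/octet-stream
Transfer-Encoding: chunked

400
<first 0x400 = 1024 bytes of the blob>
400
<next 1024 bytes>
0

There is no Content-Length anywhere; the body length is only known once the terminating zero-length chunk arrives.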

Environment and versions

  • Cloudflared version: 2025.5.0
  • Docker version 28.1.1, build 4eba377
  • Origin: Gitea with Docker Registry support via Caddy

Logs and errors

Here is what I see in the cloudflared logs:

cloudflared  | 2025-06-04T13:38:03.782579727Z 2025-06-04T13:38:03Z DBG PATCH https://gitea.REDUCTED.com/v2/REDUCTED/REDUCTED-bot/blobs/uploads/8rs8iqdqyfdfmrhhi0u5rwcyf HTTP/1.1 connIndex=3 content-length=0 event=1 headers={"Accept-Encoding":["gzip, br"],"Authorization":["Bearer REDUCTED"],"Cdn-Loop":["cloudflare; loops=1"],"Cf-Connecting-Ip":["REDUCTED"],"Cf-Ipcountry":["REDUCTED"],"Cf-Ray":["94a7d4143926db9f-FRA"],"Cf-Visitor":["{\"scheme\":\"https\"}"],"Cf-Warp-Tag-Id":["4cef2302-fdce-442b-ae7a-8d6c87fc2385"],"Content-Type":["application/octet-stream"],"User-Agent":["docker/28.1.1 go/go1.23.8 git-commit/01f442b kernel/6.11.0-26-generic os/linux arch/amd64 UpstreamClient(Docker-Client/28.1.1 \\(linux\\))"],"X-Forwarded-For":["REDUCTED"],"X-Forwarded-Proto":["https"]} host=gitea.REDUCTED.com ingressRule=3 originService=http://proxy path=/v2/REDUCTED/REDUCTED-bot/blobs/uploads/8rs8iqdqyfdfmrhhi0u5rwcyf

And in caddy logs:

proxy  | 2025-06-04T13:38:03.784457902Z {"level":"info","ts":1749044283.7843971,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.18.0.8","remote_port":"54292","client_ip":"172.18.0.8","proto":"HTTP/1.1","method":"PATCH","host":"gitea.REDUCTED.com","uri":"/v2/REDUCTED/REDUCTED-bot/blobs/uploads/8rs8iqdqyfdfmrhhi0u5rwcyf","headers":{"Connection":["keep-alive"],"Content-Type":["application/octet-stream"],"User-Agent":["docker/28.1.1 go/go1.23.8 git-commit/01f442b kernel/6.11.0-26-generic os/linux arch/amd64 UpstreamClient(Docker-Client/28.1.1 \\(linux\\))"],"Cf-Ipcountry":["REDUCTED"],"Cf-Ray":["94a7d4143926db9f-FRA"],"Cf-Visitor":["{\"scheme\":\"https\"}"],"Content-Length":["0"],"Cdn-Loop":["cloudflare; loops=1"],"X-Forwarded-For":["REDUCTED"],"X-Forwarded-Proto":["https"],"Accept-Encoding":["gzip, br"],"Cf-Warp-Tag-Id":["4cef2302-fdce-442b-ae7a-8d6c87fc2385"],"Authorization":[],"Cf-Connecting-Ip":["REDUCTED"]}},"bytes_read":0,"user_id":"","duration":0.001772099,"size":0,"status":202,"resp_headers":{"Content-Length":["0"],"Docker-Distribution-Api-Version":["registry/2.0"],"Docker-Upload-Uuid":["8rs8iqdqyfdfmrhhi0u5rwcyf"],"Location":["/v2/REDUCTED/REDUCTED-bot/blobs/uploads/8rs8iqdqyfdfmrhhi0u5rwcyf"],"Range":["0--1"],"Date":["Wed, 04 Jun 2025 13:38:03 GMT"],"Server":["Caddy"]}}

And here is how the transfer happens when there is no cloudflared in front of Caddy:

proxy  | 2025-06-04T13:39:43.717390924Z {"level":"info","ts":1749044383.7173061,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"172.18.0.1","remote_port":"36850","client_ip":"172.18.0.1","proto":"HTTP/1.1","method":"PATCH","host":"gitea.REDUCTED.com","uri":"/v2/REDUCTED/REDUCTED-bot/blobs/uploads/p1susmqoi63krmivh2reunytg","headers":{"User-Agent":["docker/28.1.1 go/go1.23.8 git-commit/01f442b kernel/6.11.0-26-generic os/linux arch/amd64 UpstreamClient(Docker-Client/28.1.1 \\(linux\\))"],"Authorization":[],"Content-Type":["application/octet-stream"],"Accept-Encoding":["gzip"],"Connection":["close"]}},"bytes_read":21173157,"user_id":"","duration":1.755927201,"size":0,"status":202,"resp_headers":{"Docker-Distribution-Api-Version":["registry/2.0"],"Docker-Upload-Uuid":["p1susmqoi63krmivh2reunytg"],"Location":["/v2/REDUCTED/REDUCTED-bot/blobs/uploads/p1susmqoi63krmivh2reunytg"],"Range":["0-21173156"],"Server":["Caddy"],"Date":["Wed, 04 Jun 2025 13:39:43 GMT"],"Content-Length":["0"]}}

Conclusion

From these logs, it's clear that cloudflared injects a Content-Length: 0 header into PATCH requests that originally streamed a body without explicitly setting Content-Length. As a result, the origin server (Caddy) receives an empty request body (bytes_read: 0), breaking valid upload behavior such as Docker registry blob uploads. Without cloudflared, the same requests work properly and the body is fully streamed and received (bytes_read: 21173157). This suggests that cloudflared interferes with or drops streamed request bodies in such cases.

tarasov65536 avatar Jun 04 '25 14:06 tarasov65536

To further isolate the issue, I built a simple HTTP server in .NET that accepts PATCH requests with Transfer-Encoding: chunked over HTTP/1.1 and writes the received body to a file.
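A minimal sketch of such a server (an HttpListener-based stand-in; the port, output path, and status code are placeholders rather than the exact code used here):

// Minimal stand-in: dump the incoming headers, then write the streamed body
// to a file so chunked uploads can be verified end to end.
using System;
using System.IO;
using System.Net;

class ChunkedPatchServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        while (true)
        {
            var ctx = listener.GetContext();
            var req = ctx.Request;

            Console.WriteLine("== Incoming Headers ==");
            foreach (string name in req.Headers)
                Console.WriteLine($"{name}: {req.Headers[name]}");

            Console.WriteLine();
            Console.WriteLine($"Receiving file with Transfer-Encoding: {req.Headers["Transfer-Encoding"]}");

            // For chunked requests ContentLength64 is -1, but InputStream still
            // yields the full decoded body until the final zero-length chunk.
            using (var file = File.Create("uploaded_file.bin"))
                req.InputStream.CopyTo(file);

            Console.WriteLine("File saved to: uploaded_file.bin");
            Console.WriteLine($"File size is: {new FileInfo("uploaded_file.bin").Length}");

            ctx.Response.StatusCode = 202;
            ctx.Response.Close();
        }
    }
}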

Then I tested it with curl using two scenarios:

✅ Direct Request (No Cloudflared)

curl -v -X PATCH http://localhost:8080/upload \
  -H "Content-Type: application/octet-stream" \
  -H "Transfer-Encoding: chunked" \
  --http1.1 \
  --data-binary "@bigfile.bin"

Server output:

== Incoming Headers ==
Accept: */*
Host: localhost:8080
User-Agent: curl/8.5.0
Content-Type: application/octet-stream
Expect: 100-continue
Transfer-Encoding: chunked

Receiving file with Transfer-Encoding: chunked
File saved to: uploaded_file.bin
File size is: 45871104

❌ Request via Cloudflared

curl -v -X PATCH https://througu.cloudflared/upload \
  -H "Content-Type: application/octet-stream" \
  -H "Transfer-Encoding: chunked" \
  --http1.1 \
  --data-binary "@bigfile.bin"

Server output:

== Incoming Headers ==
Accept: */*
Connection: keep-alive
Host: REDACTED
User-Agent: curl/8.5.0
Accept-Encoding: gzip, br
Content-Type: application/octet-stream
Content-Length: 0
Cdn-Loop: cloudflare; loops=1
Cf-Connecting-Ip: REDACTED
Cf-Ipcountry: REDACTED
Cf-Ray: 94b63c7b8d6ffd58-LCA
Cf-Visitor: {"scheme":"https"}
Cf-Warp-Tag-Id: REDACTED
X-Forwarded-For: REDACTED
X-Forwarded-Proto: https

Receiving file with Transfer-Encoding: 
File saved to: uploaded_file.bin
File size is: 0

Conclusion: Cloudflared removes the Transfer-Encoding: chunked header and replaces it with Content-Length: 0, completely stripping the request body in the process. This breaks chunked uploads and streaming PATCH requests. This behavior can cause major issues when using cloudflared in front of Docker registries, custom APIs, or any service expecting chunked uploads.

tarasov65536 avatar Jun 06 '25 08:06 tarasov65536

Hi, can you test running the tunnel with the http2 protocol to see if it fixes the issue? For that, run:

cloudflared tunnel --protocol http2 run --token <token>

jcsf avatar Jun 12 '25 15:06 jcsf

@jcsf thank you for the response!

I tested running the tunnel with --protocol http2 as you suggested.

With this setting, everything works correctly. The origin server receives the full body with Transfer-Encoding: chunked, and the file is transferred properly.

Here is the relevant output from my receiving application:

== Incoming Headers ==
Accept: */*
Connection: keep-alive
Host: REDUCTED
User-Agent: curl/8.5.0
Accept-Encoding: gzip, br
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Cdn-Loop: cloudflare; loops=1
Cf-Connecting-Ip: REDUCTED
Cf-Ipcountry: REDUCTED
Cf-Ray: 94eeeaed9ec6d624-LCA
Cf-Visitor: {"scheme":"https"}
Cf-Warp-Tag-Id: b2010a5e-83f5-41a0-bc7c-6ad093b73e8d
X-Forwarded-For: REDUCTED
X-Forwarded-Proto: https

Receiving file with Transfer-Encoding: chunked
File saved to: REDUCTED\uploaded_file.bin
File size is: 45871104

Is this a known limitation of QUIC support in cloudflared, or should it work the same way as with http2?

tarasov65536 avatar Jun 13 '25 04:06 tarasov65536

Yes, it is on our roadmap to add proper HTTP framing for QUIC so that we can support things like gRPC and HTTP/3. For now, if you are only using cloudflared for HTTP traffic, the workaround is to use the http2 protocol.
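If you run the tunnel from a config file instead of the command line, the same thing can be expressed with the top-level protocol option, roughly like this (a sketch; the tunnel ID, credentials path, hostname, and service are placeholders):

# Sketch of a config.yml using the HTTP/2 transport workaround
protocol: http2
tunnel: <TUNNEL-UUID>
credentials-file: /path/to/<TUNNEL-UUID>.json
ingress:
  - hostname: registry.example.com
    service: http://localhost:8080
  - service: http_status:404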

jcsf avatar Jun 16 '25 14:06 jcsf

This is an issue that can silently cause a lot of problems (ask me how I know). There should be a more prominent warning somewhere about using protocol auto.

jeremy-morren avatar Jun 26 '25 02:06 jeremy-morren

I'm trying to deploy Harbor in a cluster with ingress exposed via a cloudflared deployment, with Cloudflare proxying the domain through the tunnel.

First hit came from https://github.com/goharbor/harbor/issues/22063 because I'm attempting to use R2 as the back-end.

After rebuilding Harbor with fixes that address that issue, I am now hit with this issue instead. And at this point I also believe that transport is the culprit.

I attempted the HTTP/2 work-around as follows:

  config = {
    ingress = [
      {
        service  = "https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:443"
        origin_request = {
          http2_origin = true
          disable_chunked_encoding = true
          no_tls_verify = true
        }
      },
    ],
    origin_request = {
      http2_origin = true
      disable_chunked_encoding = true
      no_tls_verify = true
    }
    warp_routing = {
      enabled = false
    }
  }

Configuration is applied to the tunnels:

2025-07-11T21:22:33Z INF Updated to new configuration config="{\"ingress\":[{\"originRequest\":{\"disableChunkedEncoding\":true,\"http2Origin\":true},\"service\":\"http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:80\"}],\"originRequest\":{\"disableChunkedEncoding\":true,\"http2Origin\":true},\"warp-routing\":{\"enabled\":false}}" version=10

The end result is still the same as quoted on the community forum:

err.code="digest invalid"
err.detail="invalid digest for referenced layer: sha256:c26fd3797818a687b145a3ded2d34f685ca775852af9adac4c2fc3d7f98dd628, content does not match digest"
err.message="provided digest did not match uploaded content"

Am I missing anything else?

wizardist avatar Jul 11 '25 21:07 wizardist

Apologies, I incorrectly conflated the HTTP/2 to Origin option with the --protocol http2 argument for the tunnel. It worked with the updated deployment.

wizardist avatar Jul 11 '25 22:07 wizardist

Setting the ingress option http2Origin: false and the --protocol http2 argument for the tunnel worked for me.
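In config.yml terms that combination looks roughly like the sketch below (hostname and service are placeholders), with the tunnel itself started via cloudflared tunnel --protocol http2 run:

# Sketch: plain HTTP/1.1 to the origin, HTTP/2 only for the tunnel transport
ingress:
  - hostname: registry.example.com
    service: http://localhost:8080
    originRequest:
      http2Origin: false
  - service: http_status:404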

lohazo avatar Aug 14 '25 09:08 lohazo

The below worked for me as well:

cloudflared tunnel --protocol http2 --config /etc/cloudflared/config.yml run

cloudflared:
  image: cloudflare/cloudflared
  command: tunnel --protocol http2 --config /etc/cloudflared/config.yml run
  volumes:
    - ./cloudflared:/etc/cloudflared

mamedshahmaliyev avatar Sep 12 '25 11:09 mamedshahmaliyev

I am hitting this. For debugging purposes: I use local configuration. My coworkers using the UI tunnel configuration don't seem to see any issues.

Also worth noting is that it's a race condition. Sometimes it works, sometimes it doesn't. Probably depends on how fast the request is received.

The call is being made from a NextJS backend using fetch(). I don't see a way I can omit the transfer-encoding: chunked header. So there is no workaround except (possibly) to switch to the UI configuration. I need to test that.

Tried finding a smoking gun in the cloudflared code, but couldn't. One thing I don't know is whether the issue is happening somewhere in Cloudflare's SaaS or whether the problem is in the local cloudflared tunnel running on my Mac. Not sure how to debug that.

I attempted to set

    originRequest:
      disableChunkedEncoding: true

But it makes no difference - the error still happens and it's still racy.

A test I ran

I set up a simple server in Node:

import { createServer } from 'http';

// Collect the full request body into a string so it can be logged.
function bodyToString(body) {
    return new Promise((resolve, reject) => {
        let data = '';
        body.on('data', chunk => {
            data += chunk;
        });
        body.on('end', () => {
            resolve(data);
        });
        body.on('error', reject);
    });
}

// Log the method, URL, headers, and body of every request, then respond 200.
createServer((req, res) => {
    bodyToString(req)
        .then(body => {
            console.log({
                method: req.method,
                url: req.url,
                headers: req.headers,
                body
            });
            res.writeHead(200);
            res.end('Hello World');
        });
}).listen(1155);

I configured my tunnel:

url: http://localhost:8000
tunnel: MY TUNNEL UUID
credentials-file: /path/to/UUID.json
ingress:
  - hostname: MY TUNNEL HOST
    service: http://localhost:1155

And I began triggering requests to the endpoint. Some failed:

{
  method: 'POST',
  url: '/path',
  headers: {
    host: 'MY TUNNEL HOST',
    'user-agent': 'Next.js Middleware',
    'content-length': '0',
    accept: '*/*',
    'accept-encoding': 'gzip, br',
    'accept-language': '*',
    baggage: 'sentry-environment=development,sentry-public_key=07846d395886c4441c13ef8ae070e8c9,sentry-trace_id=b1dc5c9e7b0c4b3c9bf26401a2f5f85f, sentry-environment=development,sentry-public_key=07846d395886c4441c13ef8ae070e8c9,sentry-trace_id=475de8ed5ffc6f17a3d7897633799d9e,sentry-sample_rate=1,sentry-transaction=next%20server%20handler,sentry-sampled=true',
    'cdn-loop': 'cloudflare; loops=1',
    'cf-connecting-ip': 'SAME IPv4 AS BELOW',
    'cf-ipcountry': 'AU',
    'cf-ray': '97ed9defcef98647-PER',
    'cf-visitor': '{"scheme":"https"}',
    'cf-warp-tag-id': 'b9ad331b-20ac-4256-b3ee-220140c61ccc',
    connection: 'keep-alive',
    'content-type': 'application/json',
    'sec-fetch-mode': 'cors',
    'sentry-trace': 'b1dc5c9e7b0c4b3c9bf26401a2f5f85f-b121fc0270e67e92, 475de8ed5ffc6f17a3d7897633799d9e-cc6a3f3827f60dc0-1',
    'x-forwarded-for': 'SAME IPv4 AS BELOW',
    'x-forwarded-proto': 'https'
  },
  body: ''
}

And some succeeded:

{
  method: 'POST',
  url: '/path',
  headers: {
    host: 'MY TUNNEL HOST',
    'user-agent': 'Next.js Middleware',
    'content-length': '117',
    accept: '*/*',
    'accept-encoding': 'gzip, br',
    'accept-language': '*',
    baggage: 'sentry-environment=development,sentry-public_key=07846d395886c4441c13ef8ae070e8c9,sentry-trace_id=b1dc5c9e7b0c4b3c9bf26401a2f5f85f, sentry-environment=development,sentry-public_key=07846d395886c4441c13ef8ae070e8c9,sentry-trace_id=0fc8ac60ca4144f3118bbfa8becad4ca,sentry-sample_rate=1,sentry-transaction=next%20server%20handler,sentry-sampled=true',
    'cdn-loop': 'cloudflare; loops=1',
    'cf-connecting-ip': 'SAME IPv4 AS ABOVE',
    'cf-ipcountry': 'AU',
    'cf-ray': '97ed9e3bfb06c742-PER',
    'cf-visitor': '{"scheme":"https"}',
    'cf-warp-tag-id': 'b9ad331b-20ac-4256-b3ee-220140c61ccc',
    connection: 'keep-alive',
    'content-type': 'application/json',
    'sec-fetch-mode': 'cors',
    'sentry-trace': 'b1dc5c9e7b0c4b3c9bf26401a2f5f85f-b121fc0270e67e92, 0fc8ac60ca4144f3118bbfa8becad4ca-83425517bebdabe9-1',
    'x-forwarded-for': 'SAME IPv4 AS ABOVE',
    'x-forwarded-proto': 'https'
  },
  body: '{"realbody":"received, approximately 200 bytes in the body"}'
}


When I hit localhost:1155 directly with the same requests, they all succeeded.

hitsthings avatar Sep 14 '25 01:09 hitsthings

We have given up waiting for this to be fixed and have modified our client to not hit the problem in the first place. The fix has been in place for a few months now and has been 100% reliable (at least thousands of calls per day).

using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

public static class HttpClientJsonExtensions
{
	public static async Task<HttpResponseMessage> PostAsJsonWithExplicitLengthAsync<T>(
		this HttpClient client,
		string requestUri,
		T value,
		JsonSerializerOptions options,
		CancellationToken cancellationToken = default)
	{
		// Serialize up front so the length is known; StringContent sets
		// Content-Length, so the request is buffered rather than chunked.
		var jsonString = JsonSerializer.Serialize(value, options);
		var content = new StringContent(jsonString, Encoding.UTF8, System.Net.Mime.MediaTypeNames.Application.Json);

		return await client.PostAsync(requestUri, content, cancellationToken);
	}
}

Effectively, we had been using the .NET HttpClient helpers that stream a JSON object as the body of a POST request; as a result they were not setting Content-Length and were sending the body chunked.

We've just given up on that being a possibility, so we now use the above to force requests to be buffered before they're sent.
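For comparison, the call-site change this implies looks roughly like this (a sketch; the URI, payload, jsonOptions, and ct names are placeholders, and PostAsJsonAsync is the stock System.Net.Http.Json helper, which streams the serialized JSON and therefore sends it chunked over HTTP/1.1):

// Before: streams the serialized JSON, so no Content-Length is set and the
// body goes out with Transfer-Encoding: chunked.
var streamed = await client.PostAsJsonAsync("/api/things", payload, jsonOptions, ct);

// After: serializes up front, so Content-Length is set and nothing is chunked.
var buffered = await client.PostAsJsonWithExplicitLengthAsync("/api/things", payload, jsonOptions, ct);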

Doesn't make it right - but posting just in case that helps anyone.

kieranbenton avatar Sep 14 '25 15:09 kieranbenton

Encountered this as well - either adding a Content-Length header or using --protocol http2 worked for me.

This is an issue that can silently cause a lot of problems (ask me how I know). There should be a more prominent warning somewhere about using protocol auto.

This was a very unexpected and insidious bug. I second this: until a fix is merged, there should be a warning or troubleshooting section somewhere (unless there is one and I missed it!)

msfstef avatar Sep 24 '25 15:09 msfstef

4 months ago... Why is the priority normal? It's a P0-level bug.

iseki0 avatar Oct 10 '25 10:10 iseki0

Hi.

Would you please try again to see if the problem has gone away?

We believe we found the reason why this new behavior happened and have made a release on our edge that should have addressed it.

If the problem persists, please send us your zone ID, tunnel ID, and logs.

Thanks.

joliveirinha avatar Oct 15 '25 08:10 joliveirinha

This has been fixed. The root cause was a combination of a planned core proxy software upgrade and a latent bug in cloudflared. A new, more performant version of Cloudflare's core proxy software, which has been rolling out for several months, changed the default behavior for handling file uploads.

This change exposed a pre-existing bug in the cloudflared software's QUIC protocol implementation that caused cloudflared to incorrectly process streaming HTTP requests that lack a Content-Length header, assuming a zero-length body. The previous proxy software masked this bug by buffering all uploads by default, which ensured that origin services always received requests with a known Content-Length.

nikitacano avatar Oct 27 '25 10:10 nikitacano

@nikitacano thanks for reporting back.

Can confirm for the Harbor / Registry v2 use case (Internet -> Cloudflare -> cloudflared -> k8s -> Harbor/Registry) -- it works now without resorting to mad solutions like swapping Registry to an unsupported custom v3 build. (https://github.com/goharbor/harbor/issues/22063)

wizardist avatar Nov 19 '25 21:11 wizardist