
Proxy the 100-continue HTTP status code to pods

Open stasmir opened this issue 2 years ago • 6 comments

We are using Contour in our Kubernetes clusters, and I noticed that when a client sends the HTTP header "Expect: 100-continue" on a request to a backend pod, the load balancer automatically responds to the client with a 100 Continue status before the request ever reaches the pod. The pod does not even receive the Expect header.

I found an option in the Envoy docs: https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto.html

proxy_100_continue (bool) If proxy_100_continue is true, Envoy will proxy incoming “Expect: 100-continue” headers upstream, and forward “100 Continue” responses downstream. If this is false or not set, Envoy will instead strip the “Expect: 100-continue” header, and send a “100 Continue” response itself.
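
For illustration, this is roughly how that toggle looks in a raw Envoy http_connection_manager filter config. It is a hand-written sketch assuming direct Envoy configuration (not something Contour currently exposes, hence this issue); the route, cluster, and stat names are placeholders:

```yaml
filter_chains:
- filters:
  - name: envoy.filters.network.http_connection_manager
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
      stat_prefix: ingress_http
      # Forward "Expect: 100-continue" upstream and relay the pod's
      # "100 Continue" downstream instead of answering it at the proxy.
      proxy_100_continue: true
      route_config:
        name: local_route
        virtual_hosts:
        - name: backend
          domains: ["*"]
          routes:
          - match: { prefix: "/" }
            route: { cluster: app-cluster }
      http_filters:
      - name: envoy.filters.http.router
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```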

The reason for this feature request is that our clients need to handle 100-continue themselves before they start sending data to the backend.

Let me briefly describe the scenario. A client can send a request to the backend containing a large data stream that cannot be replayed. At the backend, the request may sit in a queue, and the 100 Continue is only sent once the backend is ready to actually receive the data. Under heavy load the backend can time the request out with a 503 status and a Retry-After header, in which case the client can reuse the same data stream for a new request. Without the 100-continue handshake, the backend ends up reading data from the client's stream while the request is still queued, which means the client cannot reuse that stream for a retry.
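
To make the client side of that flow concrete, here is a rough Go sketch. The URL, file name, and timeout are made up, and it relies on the standard library behavior that http.Transport only waits for the interim response when ExpectContinueTimeout is non-zero and the request carries an "Expect: 100-continue" header:

```go
package main

import (
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	// Wait up to 2s for "100 Continue" before writing the request body.
	client := &http.Client{
		Transport: &http.Transport{ExpectContinueTimeout: 2 * time.Second},
	}

	// Hypothetical large, non-replayable payload.
	f, err := os.Open("large-payload.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	req, err := http.NewRequest(http.MethodPost, "https://backend.example.internal/upload", f)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Expect", "100-continue")

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusServiceUnavailable {
		// The backend refused before the body was streamed, so the same
		// (still unread) stream can be used again after Retry-After.
		log.Printf("backend busy, retry after %s", resp.Header.Get("Retry-After"))
		return
	}
	log.Printf("upload finished: %s", resp.Status)
}
```

When the proxy answers the 100 Continue itself, as described above, the client releases the body to the wire regardless of whether the pod is ready to accept it, which is exactly the problem this issue is about.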

stasmir avatar Feb 09 '23 17:02 stasmir