
Lower the per connection buffer to 64k at the listener level

Open davecheney opened this issue 6 years ago • 2 comments

Envoy assigns a 1MiB buffer to each incoming connection. It is not clear whether it allocates the whole 1MiB up front, or whether that is an upper limit.

Either way, I think this should be lowered to something like 64k.

https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/lds.proto#envoy-api-field-listener-per-connection-buffer-limit-bytes
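For context, the setting linked above is configured per listener. A minimal sketch of lowering it might look like the following (the field name comes from the linked Envoy docs; the 64 KiB value is this issue's proposal, not an Envoy default, and the listener name/port are placeholders):

```yaml
# Sketch: an Envoy listener with the per-connection buffer lowered
# from the 1 MiB default to 64 KiB, as proposed in this issue.
static_resources:
  listeners:
  - name: ingress_http            # placeholder name
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    # Caps how much Envoy will buffer per downstream connection.
    per_connection_buffer_limit_bytes: 65536
    # filter_chains etc. omitted for brevity
```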

davecheney avatar Aug 29 '19 07:08 davecheney

In #1375, this is recommended to be 32k instead. Updating title accordingly.

youngnick avatar Oct 21 '19 03:10 youngnick

Based on https://github.com/projectcontour/contour/pull/1797#pullrequestreview-306974148 there's a good chance that this setting doesn't do what I thought it did. Moving to the backlog for more investigation.

davecheney avatar Oct 25 '19 04:10 davecheney

Hi, I am in the process of reviewing the Envoy configuration Contour generates for us.

Looks like one of the differences between Contour's Envoy configuration and the Envoy team's suggestions for running on the edge is that no buffer limit is defined for the Listener & Cluster.

Searching for this led me to this ticket here.
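For reference, the edge-proxy guidance suggests capping per-connection buffering on both the downstream listener and each upstream cluster; the 32 KiB value matches what was referenced from #1375 above. A rough sketch (names and omitted fields are placeholders, not Contour's actual output):

```yaml
# Sketch of the Envoy edge-proxy buffer guidance: limit buffering
# on the downstream listener and on each upstream cluster.
static_resources:
  listeners:
  - name: ingress_http                        # placeholder name
    per_connection_buffer_limit_bytes: 32768  # 32 KiB downstream
    # address, filter_chains etc. omitted
  clusters:
  - name: some_service                        # placeholder name
    per_connection_buffer_limit_bytes: 32768  # 32 KiB upstream
    # connect_timeout, load_assignment etc. omitted
```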

  1. Does Contour intentionally not override the Envoy default here?
  2. Since this ticket has been around for a while, could anybody tell me what the latest is on this?

Thanks in advance folks!

BerGer23 avatar Mar 20 '23 13:03 BerGer23

@BerGer23 I think it makes sense to look into using Envoy's recommended settings here, but we also need to ensure we understand the impact on existing users of changing the settings. One possible approach here would be to change the default setting, but also expose a tuneable to allow users to modify the setting to something else if needed.

Have you done/are you able to do any testing around changing this setting, and observing any perf or behavior changes?

skriss avatar Mar 21 '23 19:03 skriss

Hi @skriss, sorry about the late answer. I'm afraid I don't have any constructive input on this currently, neither on the performance nor behavioural side - happy to see this is actively worked on though.

BerGer23 avatar May 17 '23 13:05 BerGer23