Overload Manager - Max Global Downstream Connections
As part of the edge best practices for running Envoy, in addition to the max-heap resource monitors, there's a global connection limit set: https://www.envoyproxy.io/docs/envoy/latest/configuration/best_practices/edge#best-practices-edge
```yaml
layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      envoy:
        resource_limits:
          listener:
            example_listener_name:
              connection_limit: 10000
      overload:
        global_downstream_max_connections: 50000
```
This is particularly helpful for one of our cases, where the connections came in so rapidly that the heap resource monitor-based actions couldn't kick in and Envoy ended up in a death loop.
Adding the limit itself is not particularly invasive. However, an extension of our requirements was to make it possible to fail readiness checks while still passing liveness checks, which leads to some extra listener configuration requirements. The way I've addressed this is by using the ignore_global_conn_limit flag on the configured stats/admin listeners. By default all of the configured stats listeners ignore the connection limit, but we add an optional configuration to set up a listener that has the flag set to false.
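For reference, the Envoy-level knob is the `ignore_global_conn_limit` field on the listener; a minimal sketch (the listener name, address, and port below are illustrative, not Contour's actual generated config):

```yaml
static_resources:
  listeners:
  - name: stats-health          # illustrative; not the exact name Contour generates
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8002
    # When true, connections to this listener are not rejected once
    # overload.global_downstream_max_connections is hit (they still count
    # toward the total). Setting it to false makes this listener respect
    # the global limit, e.g. so readiness checks start failing under overload.
    ignore_global_conn_limit: true
    # filter_chains for the stats/health endpoints elided
```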
The draft PR I've opened (https://github.com/projectcontour/contour/pull/6308) does a minimal configuration update, but it makes for a very cumbersome interaction between the serve-context config, the Contour config CRD, and how it's passed to the underlying internal envoy package. Before I make any other changes (and add more tests, etc.) I'd like to get some feedback on a better path forward re: configuration.
My proposal would be to shift the configuration of the listeners up to the serve command, which would be responsible for constructing the listener configurations from its parameters, effectively moving the core branching logic out of v3.StatsListeners into main.doServe. I see an added benefit in reducing how deep the Contour APIs get passed into the internal components, and it keeps the configuration isolated to the serve command instead of having it distributed to the inner parts. I suspect this may simplify some of the test configuration as well, so we don't have to rely on assumptions about the listeners returned (e.g., taking the first listener, https://github.com/projectcontour/contour/blob/main/internal/featuretests/v3/envoy.go#L597, and taking that for granted in the discovery responses).
I wrote a proof-of-concept change to illustrate what that might look like, which, if it seems reasonable, I will implement in my draft (https://github.com/projectcontour/contour/pull/6308): https://github.com/projectcontour/contour/compare/main...seth-epps:options-listener-config?expand=1
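Roughly, the shape of the idea is sketched below (hypothetical names, not the actual PoC code; see the compare link for the real thing):

```go
// Hypothetical sketch: the serve command resolves the stats/admin listener
// parameters itself and hands fully-built listeners to the internal envoy
// package. Type and function names are illustrative, not Contour's.
package main

import (
	"fmt"

	envoy_listener_v3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3"
)

// statsListenerParams is what doServe would derive from the serve context /
// ContourConfiguration before any internal package gets involved.
type statsListenerParams struct {
	Address                string
	Port                   uint32
	RespectGlobalConnLimit bool
}

// buildStatsListeners keeps the branching (which listeners exist, which of them
// respect the global connection limit) in one place instead of inside
// v3.StatsListeners.
func buildStatsListeners(params []statsListenerParams) []*envoy_listener_v3.Listener {
	listeners := make([]*envoy_listener_v3.Listener, 0, len(params))
	for _, p := range params {
		listeners = append(listeners, &envoy_listener_v3.Listener{
			Name:                  fmt.Sprintf("stats-%s-%d", p.Address, p.Port),
			IgnoreGlobalConnLimit: !p.RespectGlobalConnLimit,
			// address, filter chains, and stats/health routes elided for brevity
		})
	}
	return listeners
}
```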
cc @projectcontour/maintainers
> My proposal would be to shift the configuration of the listeners up to the serve command
An issue with this is Gateway API Gateway reconciliation: Listeners can be dynamically changed without a Contour restart/recreate, so they do need to be generated similarly to other resources.
ah i guess this just concerns the admin/operator-facing Listeners, so maybe disregard this point
To summarize some of the context from here and what we have available today:
- we do have per-Listener downstream connection limits available, but these apply only to the HTTP/HTTPS listeners for proxying app traffic, not stats/health/admin Listeners: https://github.com/projectcontour/contour/blob/a485abb0a595c55f03239451cf0731f1d4fdf86f/apis/projectcontour/v1alpha1/contourconfig.go#L436
- a global connection limit would be enforced in tandem with per-Listener limits, i.e. you can set a per-Listener limit lower than the global limit
- having a Listener ignore the global limit means its downstream connections still count against the limit, but connections to that Listener are not limited
- in the original issue content we have an example configuring the global connection limit via runtime key, but that method is deprecated, so we need to configure it via the bootstrap overload manager resource monitor instead (see the sketch below)
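For concreteness, a minimal sketch of the bootstrap form, assuming Envoy's standard downstream connections resource monitor (the 50000 is just the number carried over from the runtime example above):

```yaml
overload_manager:
  resource_monitors:
  - name: envoy.resource_monitors.global_downstream_max_connections
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.resource_monitors.downstream_connections.v3.DownstreamConnectionsConfig
      max_active_downstream_connections: 50000
```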
It makes sense to me to leave the stats endpoints ignoring the global limit; we still want to be able to serve readiness checks and stats even when Envoy is overloaded. The admin Listener (which listens on a Unix domain socket) should also ignore the global limit.
I think in general the linked PR looks like it should work just fine; the only thing is the readiness/liveness check note from above:
> an extension of our requirements was to make it possible to fail readiness checks while still passing liveness checks
at the moment the base install YAMLs don't include a liveness check, only a readiness check (see: https://github.com/projectcontour/contour/blob/a485abb0a595c55f03239451cf0731f1d4fdf86f/examples/contour/03-envoy.yaml#L76-L79)
are you all adding your own liveness check? if so it would be interesting to see how that is configured and how that has worked out
given we only have a readiness check at the moment, if i'm understanding correctly we could just set that existing Listener to respect the global connection limit so that instances that are overloaded are taken out of the ready Service endpoints but not shut down
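for context, the readiness probe in the example manifest is approximately the following (values taken from the linked 03-envoy.yaml; check the pinned revision for the exact snippet), i.e. it hits the stats/health listener on 8002, which is the Listener that would need to respect the global limit:

```yaml
readinessProbe:
  httpGet:
    path: /ready
    port: 8002
```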
the admin/stats Listeners should always ignore the global limit, and it seems valid to have the readiness probe optionally respect the global connection limit (or always respect it, given this will only matter if you actually set a limit)
The Contour project currently lacks enough contributors to adequately respond to all Issues.
This bot triages Issues according to the following rules:
- After 60d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the Issue is closed
You can:
- Mark this Issue as fresh by commenting
- Close this Issue
- Offer to help out with triage
Please send feedback to the #contour channel in the Kubernetes Slack