Handle EnvoyProxy Image version upgrades

Open arkodg opened this issue 1 year ago • 13 comments

arkodg avatar Jul 26 '23 20:07 arkodg

I am interested in picking this up :)

cnvergence avatar Aug 01 '23 12:08 cnvergence

thanks @cnvergence ! Thinking out loud, an outcome of this issue could be an E2E test where all client requests are successful while 2 replicas of Envoy Proxy are undergoing a rolling restart.

arkodg avatar Aug 01 '23 16:08 arkodg

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

github-actions[bot] avatar Aug 31 '23 20:08 github-actions[bot]

@arkodg coming back to this after a while, could you please point me to where I should start? I did check the upgrade and it seems like it is handled, but I may be wrong.

As for the E2E, I assume I should add a new scenario to the e2e test suite :)

cnvergence avatar Sep 01 '23 16:09 cnvergence

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

github-actions[bot] avatar Oct 01 '23 20:10 github-actions[bot]

hey, I know @chauhanshubham was looking into a similar test for control plane upgrades, which would invariably also upgrade envoy proxy. Should we collapse those two e2e tests into one where we perform an upgrade from the last known EG minor version, and ensure that

  1. there is no config churn in the data plane during an upgrade
  2. there is no traffic drop in the data plane while this upgrade happens (assuming we always have 2 replicas of the control plane and data plane running)

arkodg avatar Oct 07 '23 01:10 arkodg

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

github-actions[bot] avatar Nov 20 '23 00:11 github-actions[bot]

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

github-actions[bot] avatar Jan 06 '24 20:01 github-actions[bot]

I'm concerned that a hitless in-place upgrade of envoy is not trivial.

A graceful termination of envoy may require:

  • Failing LB/Kubelet probes to stop new connections from being established to terminating pods
  • Triggering envoy to drain listeners
  • Delaying pod termination until all connections are terminated
  • Additional factors to consider:
    • IaaS LBs behave differently in response to targets becoming unhealthy (e.g. AWS will reset all existing connections, while GCP/Azure will stop establishing new connections but retain existing connections) and have varying levels of configurability for HCs.
    • The number of envoy pods per node and LB's ExternalTrafficPolicy impact the correctness of LB HCs.

It's also important to avoid race conditions where a new instance of envoy is receiving traffic before it was configured (e.g. due to order of component restart, failures in new control plane version, etc.).

Some prior art:

  • Contour: https://github.com/projectcontour/contour/blob/main/design/envoy-shutdown.md
  • Gloo Edge: https://docs.solo.io/gloo-edge/latest/operations/advanced/zero-downtime-gateway-rollout/
  • Reddit: https://www.reddit.com/r/RedditEng/comments/1aqsxqf/proper_envoy_shutdown_in_a_kubernetes_world

guydc avatar Feb 14 '24 14:02 guydc

@arkodg

I executed a naive test:

  • Environment: kind, metallb, EG quickstart.yaml
  • envoy proxy replicas: 2
  • upgrade: 0.6.0 => 0.0.0-latest using helm upgrade
  • load simulation during upgrade: hey -c 100 -q 10 -z 300s -host www.example.com http://172.18.255.200/

The upgrade caused some client-facing failures during the test:

Error distribution:
  [8]	Get "http://172.18.255.200/": EOF
  [32]	Get "http://172.18.255.200/": dial tcp 172.18.255.200:80: connect: connection refused
  [1]	Get "http://172.18.255.200/": read tcp 172.18.0.1:55220->172.18.255.200:80: read: connection reset by peer
  [1]	Get "http://172.18.255.200/": read tcp 172.18.0.1:55260->172.18.255.200:80: read: connection reset by peer

It's probably possible to tune some of the parameters mentioned in my previous comment to achieve a hitless upgrade under certain test conditions (RPS, connection reuse, HTTP version, ...). But I'm not sure we can claim to have a hitless upgrade in general based on such a test.

So, I propose that for the GA scope, we focus on an upgrade test that ensures request convergence to successful execution after the upgrade. A limited hitless upgrade test can be a stretch-goal.

In the future, we can explore:

  • Implementing a graceful envoy shutdown feature and providing guidance on configuring envoy for hitless in-place upgrades
  • Supporting canary deployments

WDYT?

guydc avatar Feb 14 '24 20:02 guydc

hey @guydc, I was hoping we could have some test for a hitless upgrade in v1.0, with caveats that can hopefully be removed over time post-GA. I do agree we can split this up and make the hitless test a stretch goal for v1.0.

arkodg avatar Feb 14 '24 21:02 arkodg

this should be fixed with #2633, keeping this open so that it can be validated with an e2e test

arkodg avatar Feb 27 '24 22:02 arkodg

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

github-actions[bot] avatar Apr 27 '24 08:04 github-actions[bot]

fixed with https://github.com/envoyproxy/gateway/pull/2862

arkodg avatar May 08 '24 21:05 arkodg