aws-load-balancer-controller

Document zero-downtime deployment for IP targets

Open kishorj opened this issue 4 years ago • 36 comments

Is your feature request related to a problem? Document setting up zero-downtime deployment with the AWS Load Balancer Controller.

Describe the solution you'd like Documentation with detailed steps.

Describe alternatives you've considered N/A

kishorj avatar Jul 21 '21 22:07 kishorj

/kind documentation

kishorj avatar Jul 21 '21 22:07 kishorj

@kishorj Is there a timeline you're targeting to document how to achieve zero-downtime deployments? If not, could you please give some pointers on how this can be achieved?

Looking at the related issues filed, the solutions are mostly around adding a sleep in the preStop step. I'd really appreciate it if you could share your recommendation.

shubham391 avatar Aug 16 '21 15:08 shubham391

Found this in documentation: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/pod_readiness_gate

This describes a deployment scenario where the service can have an outage. I'll give this a try today and see if it solves my case.
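For reference, per that page the injection is enabled with a namespace label, something like (the namespace name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: my-app                    # placeholder namespace
  labels:
    # Tells the AWS Load Balancer Controller to inject a readiness gate into
    # pods in this namespace that back an ALB/NLB target group.
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled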

shubham391 avatar Aug 17 '21 06:08 shubham391

Enabling Pod Readiness Gate reduced the 5xx errors, but did not completely eliminate them.

Found this issue https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1719#issuecomment-743437832 where @M00nF1sh has explained the breakdown of factors to consider when deciding the preStop sleep value. After setting an appropriate value in preStop, I'm able to deploy without any errors.

It was also suggested in one of the issues to enable graceful shutdown in the server, but I found that if the preStop sleep is long enough, skipping graceful shutdown is also fine, since the pod gets fully deregistered from the LB during the sleep phase itself. So by the time the server receives the TERM signal, the LB has already stopped sending new requests to the pod (and in-flight requests have also completed). Still, it's good to enable it in case there are other edge cases.
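For anyone who wants a concrete starting point, this is roughly what the preStop approach looks like (a sketch with placeholder names and timings; the sleep has to be calibrated against your deregistration behaviour as per the linked comment):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Must cover the preStop sleep plus the time the app needs to finish
      # in-flight requests after it finally receives SIGTERM.
      terminationGracePeriodSeconds: 90
      containers:
      - name: my-app
        image: my-app:latest      # placeholder image
        lifecycle:
          preStop:
            exec:
              # Keep serving while the controller deregisters the pod IP from
              # the target group; SIGTERM is only delivered after this hook.
              command: ["/bin/sh", "-c", "sleep 60"]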

shubham391 avatar Aug 24 '21 08:08 shubham391

I did create an article about this a while back. https://aws.plainenglish.io/6-tips-to-improve-availability-with-aws-load-balancers-and-kubernetes-ad8d4d1c0f61 Essentially the steps are:

  1. Handle Shutdown Gracefully
  2. Calibrate Your Timings
  3. Add Pod Anti-Affinity to your Deployment (see the sketch after this list)
  4. Use Pod-Readiness Gates
  5. Use the AWS Load Balancer Controller Directly (no Nginx controller or HAProxy controller)
  6. Monitor and Measure Everything!
  7. Use PodDisruptionBudgets

I would be curious if anyone else has any additional tips.
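For tips 3 and 7, the shape is roughly this (a sketch with placeholder names; tune replicas and budgets for your workload):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Prefer spreading replicas across nodes so a single node drain
          # can't take out every healthy target at once.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: my-app
      containers:
      - name: my-app
        image: my-app:latest    # placeholder image
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: my-app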

keperry avatar Aug 26 '21 15:08 keperry

@keperry Thanks for sharing, that was very helpful.

shubham391 avatar Aug 27 '21 14:08 shubham391

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 25 '21 14:11 k8s-triage-robot

/remove-lifecycle stale

project0 avatar Dec 08 '21 09:12 project0

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 08 '22 09:03 k8s-triage-robot

/remove-lifecycle stale

project0 avatar Mar 08 '22 10:03 project0

I haven't got it working yet. Just a simple replacement of the pod's image (for example changing from image: nginx to image: httpd) still causes some connections to drop.

---
apiVersion: v1
kind: Namespace
metadata:
  name: test-nlb-ip
  labels:
    # https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/pod_readiness_gate/
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: test-nlb-ip
  labels:
    run: my-nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.test.REPLACE_ME.com
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=30,deregistration_delay.connection_termination.enabled=true,preserve_client_ip.enabled=true
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    # service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    # service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  # externalTrafficPolicy: Local
  # externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: test-nlb-ip
spec:
  strategy:
    rollingUpdate:
      maxUnavailable: "33%"
  selector:
    matchLabels:
      run: my-nginx
  replicas: 5
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: my-nginx
        image: httpd
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 60"]
        ports:
        - name: http
          containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: http
          failureThreshold: 1
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: http
          failureThreshold: 1
          periodSeconds: 10
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-nginx-pdb
  namespace: test-nlb-ip
spec:
  maxUnavailable: 33%
  selector:
    matchLabels:
      run: my-nginx

Testing with

import requests
from time import time

hits = 0
misses = 0
average_rtt = 0
count = 0
while True:
    response = None
    try:
        start = time()
        response = requests.get("http://nginx.test.REPLACE_ME.com/", timeout=10)
        # response = requests.get("http://localhost:8080/", timeout=10)
        end = time()
        milliseconds = (end - start) * 1000
        # Rolling average weighted over (at most) the last 25 samples
        average_rtt = (average_rtt * count + milliseconds) / (count + 1)
        count = min(count + 1, 25)
    except requests.RequestException:
        response = None
    if response is not None and response.status_code == 200:
        hits += 1
    else:
        misses += 1
    print(f"hits: {hits} misses: {misses} avg rtt: {int(average_rtt)} ms")

Version 2.4, EKS 1.20

sjmiller609 avatar Apr 12 '22 18:04 sjmiller609

@sjmiller609 - are you signalling that the pod should no longer take traffic by failing the readiness probe (e.g. throwing a 500) during the "shutdown wait" period? I can't quite tell if your app is doing that. It looks like the "sleep" is handling the "shutdown wait", but if nothing makes the readiness probe fail, kube will keep sending traffic there. Additionally, I would explicitly set a timeout for your readiness probe.
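If changing the app isn't an option, one alternative is a file-based readiness probe that the preStop hook flips - an untested sketch of the container spec, with placeholder paths and timings:

        lifecycle:
          preStop:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              # Mark the pod as draining first, then keep serving while the
              # readiness probe fails and the load balancer drains the target.
              - "touch /tmp/draining && sleep 60"
        readinessProbe:
          exec:
            command:
            # Not ready as soon as the draining marker exists; a real setup
            # would also check the app itself here.
            - "/bin/sh"
            - "-c"
            - "test ! -f /tmp/draining"
          periodSeconds: 5
          failureThreshold: 1
          timeoutSeconds: 3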

keperry avatar Apr 12 '22 18:04 keperry

Thanks, I think this is what I'm missing. I will give this a shot right now!

sjmiller609 avatar Apr 12 '22 18:04 sjmiller609

I'm giving this a go, but I'm not sure it's quite right, because I think you are saying the workload should continue serving regular traffic and only fail the readiness probe:

        lifecycle:
          preStop:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - |
                  nginx -c /etc/nginx/nginx.conf -s quit
                  while pgrep -x nginx; do
                    sleep 1
                  done

sjmiller609 avatar Apr 12 '22 18:04 sjmiller609

Since I will have to work out details in the workload, I will replace my demo service by my actual ingress controller and then report back.

sjmiller609 avatar Apr 12 '22 19:04 sjmiller609

I think the intended order of events is:

  • New pods launched
  • Update policy allows all to be launched at the same time
  • Readiness gate applicable to new pods
  • Waiting on initial setup with NLB
  • NLB initial setup ready
  • Pods ready because pass readiness gate
  • Old pods marked as terminating, triggered by the other pods being ready
  • Drain starts on NLB immediately
  • Prestop hook is executed, sleep 180 seconds
    • This is to avoid the limitation NLB may continue to send traffic for up to 180 seconds to a draining target
  • Drain completes before 180 seconds
  • Prestop hook done sleeping
  • SIGTERM sent to pod
  • terminationDrainDuration applied
    • Istio-specific concept
  • 10s for any remaining connections to close and existing connections are force closed by istio
  • NLB will reach deregistration delay after total of 300 seconds
  • NLB will close any remaining connections

It seems like in my case, my workload can just sleep for 180 seconds, and doesn't need to be customized for the readiness probe. It's just about waiting long enough to satisfy the limitation of the AWS NLB.

From the AWS target group docs: "If the deregistered target stays healthy and an existing connection is not idle, the load balancer can continue to send traffic to the target. To ensure that existing connections are closed, you can do one of the following: enable the target group attribute for connection termination, ensure that the instance is unhealthy before you deregister it, or periodically close client connections."

I'm trying to understand the purpose of @keperry's suggestion, and I am guessing the reasoning is that by making readiness fail, the AWS LB controller will mark the target as unhealthy (not sure?). That would satisfy the condition in the above quote to "ensure that the instance is unhealthy before you deregister it".

References:

  • https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
  • https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#deregistration-delay
  • https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/

Other notes:

  • I was seeing "random" latency spikes that I had trouble working out. My monitoring script was confusing me because I was being rate limited by my DNS provider. To fix this, hardcode your NLB's IP in /etc/hosts while running the testing script.

I will post my manifests below that I used to get it working in my case.

sjmiller609 avatar Apr 12 '22 22:04 sjmiller609

Not shown:

  • Install Istio Operator

The below manifests were working in my test to run the monitoring script and do a "kubectl rollout restart deployments -n istio-system". I think they are not the minimal configuration.

Istio configuration:

---
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    # https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/deploy/pod_readiness_gate/
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
---
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-default
  namespace: istio-system
spec:
  meshConfig:
    defaultConfig:
      # The amount of time allowed for connections to complete on proxy shutdown.
      # On receiving SIGTERM or SIGINT, istio-agent tells the active Envoy to
      # start draining, preventing any new connections and allowing existing
      # connections to complete. It then sleeps for the
      # termination_drain_duration and then kills any remaining active
      # Envoy processes. If not set, a default of 5s will be applied.
      #
      # This process will occur after the preStop lifecycle hook.
      # https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
      terminationDrainDuration: 10s
  components:
    ingressGateways:
    - enabled: true
      k8s:
        overlays:
        - kind: Deployment
          name: istio-public-ingressgateway
          patches:
          - path: spec.template.spec.containers[name:istio-proxy].lifecycle.preStop.exec.command
            # NLB may continue routing traffic for up to 180 seconds after
            # the endpoint is marked as 'draining' in the NLB.
            # We sleep before initiating shutdown to allow NLB connections
            # to stop coming to the container.
            value:
              - "/bin/sh"
              - "-c"
              - "sleep 180"
          - path: spec.template.spec.terminationGracePeriodSeconds
            # We allow the preStop sleep duration, plus the
            # terminationDrainDuration, plus 10 seconds to terminate.
            value: 200
        podDisruptionBudget:
          maxUnavailable: 33%
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 0
        hpaSpec:
          minReplicas: 5
          maxReplicas: 10
        service:
          # Don't configure this section like this for a real cluster;
          # it is only here to avoid needing HTTPS, since the AWS LB
          # controller injects pod readiness gates for each port on
          # the service.
          ports:
          - name: http2
            port: 80
            protocol: TCP
            targetPort: 8080
        # service:
        #   externalTrafficPolicy: Local
        serviceAnnotations:
          external-dns.alpha.kubernetes.io/hostname: ha.test.REPLACE_ME.com
          service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=200,deregistration_delay.connection_termination.enabled=true,preserve_client_ip.enabled=true
          service.beta.kubernetes.io/aws-load-balancer-internal: "false"
          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
          service.beta.kubernetes.io/aws-load-balancer-type: "external"
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
          service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
          service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
          # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          # service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      name: istio-public-ingressgateway
    - enabled: false
      name: istio-ingressgateway
  hub: gcr.io/istio-release
  profile: default

Istio Gateway and VirtualService configuration:

---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "ha.test.REPLACE_ME.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-nginx
  namespace: istio-system
spec:
  hosts:
  - "ha.test.REPLACE_ME.com"
  gateways:
  - istio-public-gateway
  http:
  - route:
    - destination:
        host: my-nginx.test-nlb-ip.svc.cluster.local

Nginx

---
apiVersion: v1
kind: Namespace
metadata:
  name: test-nlb-ip
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: test-nlb-ip
  labels:
    run: my-nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: test-nlb-ip
spec:
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0
  selector:
    matchLabels:
      run: my-nginx
  replicas: 5
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        lifecycle:
          preStop:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - |
                  nginx -c /etc/nginx/nginx.conf -s quit
                  while pgrep -x nginx; do
                    sleep 1
                  done
                  echo "done"
        ports:
        - name: http
          containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: http
          failureThreshold: 2
          timeoutSeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: http
          failureThreshold: 2
          timeoutSeconds: 5
          periodSeconds: 10
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-nginx-pdb
  namespace: test-nlb-ip
spec:
  maxUnavailable: 33%
  selector:
    matchLabels:
      run: my-nginx

sjmiller609 avatar Apr 12 '22 22:04 sjmiller609

@sjmiller609 tl;dr: check out this workaround: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1719#issuecomment-763801825

project0 avatar Apr 13 '22 06:04 project0

Update, this configuration has been working perfectly for a few weeks:

apiVersion: v1
kind: Namespace
metadata:
  name: istio-config
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
  name: istio-system
---
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-default
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - enabled: true
      k8s:
        hpaSpec:
          maxReplicas: 15
          minReplicas: 5
        nodeSelector:
          spotinst.io/node-lifecycle: od
        overlays:
        - kind: Deployment
          name: istio-public-ingressgateway
          patches:
          - path: spec.template.spec.containers[name:istio-proxy].lifecycle.preStop.exec.command
            value:
            - /bin/sh
            - -c
            - sleep 180
          - path: spec.template.spec.terminationGracePeriodSeconds
            value: 200
          - path: spec.template.metadata.labels.spotinst\.io/restrict-scale-down
            value: "true"
        podAnnotations:
          ad.datadoghq.com/tags: '{"source": "envoy", "service": "istio-public-ingressgateway"}'
        podDisruptionBudget:
          maxUnavailable: 20%
        serviceAnnotations:
          external-dns.alpha.kubernetes.io/hostname: platform.getcerebral.com,portal.getcerebral.com
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
          service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
          service.beta.kubernetes.io/aws-load-balancer-internal: "false"
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
          service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
          service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=200,deregistration_delay.connection_termination.enabled=true,preserve_client_ip.enabled=false
          service.beta.kubernetes.io/aws-load-balancer-type: external
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 0
      name: istio-public-ingressgateway
    - enabled: false
      name: istio-ingressgateway
    pilot:
      k8s:
        hpaSpec:
          maxReplicas: 10
          minReplicas: 3
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
        serviceAnnotations:
          ad.datadoghq.com/endpoints.check_names: '["istio"]'
          ad.datadoghq.com/endpoints.init_configs: '[{}]'
          ad.datadoghq.com/endpoints.instances: |
            [
              {
                "istiod_endpoint": "http://%%host%%:15014/metrics",
                "use_openmetrics": true
              }
            ]
  hub: gcr.io/istio-release
  meshConfig:
    accessLogFile: /dev/stdout
    defaultConfig:
      terminationDrainDuration: 10s
    extensionProviders:
    - envoyExtAuthzHttp:
        headersToDownstreamOnDeny:
        - uid
        - client
        - access-token
        headersToUpstreamOnAllow:
        - uid
        - client
        - access-token
        includeHeadersInCheck:
        - uid
        - client
        - access-token
        pathPrefix: /api/v1/auth/istio
        port: "80"
        service: auth-service.apps.svc.cluster.local
      name: auth-service
  profile: default

sjmiller609 avatar Apr 29 '22 20:04 sjmiller609

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 28 '22 21:07 k8s-triage-robot

/remove-lifecycle stale

project0 avatar Jul 29 '22 22:07 project0

@shubham391: I am implementing the same in my Kubernetes cluster, but I am unable to calculate the sleep time for the preStop hook and terminationGracePeriodSeconds. Currently terminationGracePeriodSeconds is 120 seconds and the deregistration delay is 300 seconds. Do we have any mechanism to calculate this?

jyotibhanot avatar Aug 12 '22 13:08 jyotibhanot

To fix this issue when using Istio + NLB (IP targets), here are the working defaults.

Ingress gateway Deployment:

terminationGracePeriodSeconds: 300
podAnnotations:
  proxy.istio.io/config: |
    drainDuration: 300s
    parentShutdownDuration: 301s
    terminationDrainDuration: 302s
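Not an official formula, but the relationship that has to hold (following @M00nF1sh's breakdown linked earlier in the thread) is roughly:

  • preStop sleep (or proxy drain duration) >= time for the controller to deregister the target, plus the up-to-180-second window in which the NLB may keep routing to a draining target
  • terminationGracePeriodSeconds >= preStop sleep + time to finish in-flight requests after SIGTERM, plus a small buffer

For example, sjmiller609's manifests above use 180s of preStop sleep + 10s terminationDrainDuration + 10s of buffer = 200s for terminationGracePeriodSeconds, with deregistration_delay.timeout_seconds also set to 200.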

hariomsaini avatar Aug 26 '22 10:08 hariomsaini

Just to check my reading of the docs (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#deregistration-delay):

When you deregister a target, the load balancer stops creating new connections to the target

If that's correct, then it implies to me that there is only a small window in which the load balancer will send the old pod new connections: from the point at which graceful termination is initiated by Kubernetes, to the point at which the load balancer is told by aws-load-balancer-controller to deregister the target.

The kubernetes docs state that in graceful termination, a SIGTERM is sent, which a pod must respond to. Some may implement their own draining logic. If that logic includes preventing new connections, then we may hit our window above, where the load balancer still thinks the old pod is "good", and sends a new connection, but the old pod rejects it, causing an error to ripple back up to the client.

It feels like an ideal solution would somehow prevent pod termination (perhaps via the pre-stop hook, doing a similar thing as the pod readiness gate) until the load balancer had confirmed that the old pod target was "draining". From that point, existing connections could be handled with the standard SIGTERM and terminationGracePeriodSeconds from https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination.
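Purely as an illustration of that idea, a preStop hook could poll the target's state and only let termination proceed once the NLB reports it as draining. This is a sketch only: it assumes the image ships the aws CLI, the pod has IAM permission for elasticloadbalancing:DescribeTargetHealth, and TARGET_GROUP_ARN / POD_IP are injected as env vars (POD_IP via the downward API) - none of which the controller provides out of the box:

        lifecycle:
          preStop:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - |
                # Block until the NLB reports this pod's target as draining
                # (or unused), then give in-flight connections time to finish.
                until aws elbv2 describe-target-health \
                    --target-group-arn "$TARGET_GROUP_ARN" \
                    --targets Id="$POD_IP",Port=80 \
                    --query 'TargetHealthDescriptions[0].TargetHealth.State' \
                    --output text | grep -Eq 'draining|unused'; do
                  sleep 5
                done
                sleep 30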

chalford avatar Oct 24 '22 11:10 chalford

I just wrote this to mitigate the issue: https://github.com/matti/k8s-prestop-sidecar

Interestingly, when the NLB starts draining a target it first stops the health checks, but keeps sending traffic for about 30 seconds to the target, which is no longer being health checked.

matti avatar Dec 30 '22 13:12 matti

@sjmiller609 Thank you very much, this helped us a lot, enabling a zero downtime rolling update with Istio 1.17.

For fellow readers, I would like to point out that with the latest Istio versions you can use the following environment variables on the gateway pods instead of the sleep command:

MINIMUM_DRAIN_DURATION: "180s"
EXIT_ON_ZERO_ACTIVE_CONNECTIONS: "true"

This also allows you to use the distroless variants, where sleep is not available.
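In the IstioOperator layout shown earlier in this thread, I believe these can be set through the gateway's k8s.env field - a sketch, to be double-checked against your Istio version:

  components:
    ingressGateways:
    - enabled: true
      name: istio-public-ingressgateway
      k8s:
        env:
        # Replaces the `sleep 180` preStop hook: Envoy drains for at least
        # this long and exits once no connections remain active.
        - name: MINIMUM_DRAIN_DURATION
          value: "180s"
        - name: EXIT_ON_ZERO_ACTIVE_CONNECTIONS
          value: "true"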

woehrl01 avatar Mar 22 '23 21:03 woehrl01

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 20 '23 22:06 k8s-triage-robot

/remove-lifecycle stale

Constantin07 avatar Jun 21 '23 06:06 Constantin07

Any way to define terminationGracePeriodSeconds on a Helm installation?

luisiturrios1 avatar Jul 25 '23 01:07 luisiturrios1

Unfortunately not; we used Kustomize on top of Helm (MacGyver solution?).
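In case it helps, the rough shape is a strategic-merge patch over the rendered chart output (a sketch; the Deployment name and namespace are placeholders that must match your release):

# kustomization.yaml
resources:
- helm-rendered.yaml                        # output of `helm template ...`
patches:
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: aws-load-balancer-controller    # placeholder, match your release
      namespace: kube-system                # placeholder
    spec:
      template:
        spec:
          terminationGracePeriodSeconds: 60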

dvbthijsvnuland avatar Aug 16 '23 09:08 dvbthijsvnuland