
Readiness probe fails if the container restarts after a liveness probe failure

Open Shashankft9 opened this issue 2 years ago • 18 comments

What version of Knative?

v1.2

Expected Behavior

After a liveness probe failure, the container should restart and ideally start serving traffic again, just like it would for a k8s Deployment.

Actual Behavior

After the restart triggered by the liveness probe failure, the pod starts failing its readiness probe and serves no traffic.

Steps to Reproduce the Problem

ksvc:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: healthprobes-go
spec:
  template:
    spec:
      containers:
        - image: shashankft/healthprobes@sha256:cc1cb4323e4d1cdee62e3ae7623ff6a213db72e89c17ad4772c39fe841b98fb9
          env:
            - name: TARGET
              value: "Go Sample v1"
          livenessProbe:
            httpGet:
              path: /healthz/liveness
              port: 0
            periodSeconds: 15
            failureThreshold: 1

the image here is produced from the code here: https://github.com/knative/serving/blob/db4c85b641703f94a700ada8cc074d28b423b5eb/test/test_images/healthprobes/health_probes.go

Hit the /start-failing path
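
For anyone reproducing this without the image, here is a minimal sketch of such a probe-testing server. It is an illustrative stand-in, not the linked test image; the in-memory failing flag and handler bodies are assumptions, only the paths match the config above.

package main

import (
	"net/http"
	"sync/atomic"
)

func main() {
	// In-memory flag; a container restart resets it, which is why the
	// liveness failure clears once the kubelet restarts the container.
	var failing atomic.Bool

	// Hitting this path flips both probes into a failing state.
	http.HandleFunc("/start-failing", func(w http.ResponseWriter, r *http.Request) {
		failing.Store(true)
	})

	probe := func(w http.ResponseWriter, r *http.Request) {
		if failing.Load() {
			w.WriteHeader(http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
	http.HandleFunc("/healthz/liveness", probe)
	http.HandleFunc("/healthz/readiness", probe)

	// Normal traffic handler.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello!"))
	})

	http.ListenAndServe(":8080", nil)
}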

relevant events of the ksvc pod:

  Normal   Created    2m                 kubelet            Created container queue-proxy
  Normal   Started    2m                 kubelet            Started container queue-proxy
  Normal   Killing    60s                kubelet            Container user-container failed liveness probe, will be restarted
  Warning  Unhealthy  60s (x3 over 90s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Started    9s (x2 over 2m)    kubelet            Started container user-container
  Normal   Created    9s (x2 over 2m)    kubelet            Created container user-container
  Normal   Pulled     9s                 kubelet            Container image "shashankft/healthprobes@sha256:cc1cb4323e4d1cdee62e3ae7623ff6a213db72e89c17ad4772c39fe841b98fb9" already present on machine
  Warning  Unhealthy  0s (x6 over 50s)   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503

I tried to run a k8s deployment with the same image and liveness probe configuration, and it seemed to behave as I hoped, whenever I explicitly hit the /start-failing path, in some seconds liveness probe will report failure and the container will restart and start serving the traffic again.

Shashankft9 avatar Jan 31 '22 12:01 Shashankft9

@Shashankft9 could you check the kubelet logs for the failure? The reason for the request failure should be recorded there. Note that you need to set the kubelet log level to >=4 and restart the kubelet service.

skonto avatar Feb 02 '22 12:02 skonto

I can see the following logs in kubelet:

I0208 01:54:37.906694   12942 http.go:137] Probe failed for http://10.233.65.50:8012/ with request headers map[Accept:[*/*] K-Network-Probe:[queue] User-Agent:[kube-probe/1.20]], response body: shutting down
I0208 01:54:37.906752   12942 prober.go:117] Readiness probe for "healthprobes-go-00001-deployment-55cf484684-dhgct_default(b0570c26-41ea-4156-9367-372f5775415e):queue-proxy" failed (failure): HTTP probe failed with statuscode: 503

Shashankft9 avatar Feb 08 '22 07:02 Shashankft9

@Shashankft9 it seems you are hitting this line: https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/vendor/knative.dev/pkg/network/handlers/drain.go#L102

StatusServiceUnavailable            = 503 // RFC 7231, 6.6.4

Due to the user container restart, queue-proxy goes into draining mode, as described here:

https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/vendor/knative.dev/pkg/network/handlers/drain.go#L51-L57

In the code comment it should read "503 shutting down". DefaultDrainTimeout = 45s. The K8s defaults for readiness probes apply, e.g.:

".Spec.Containers[0].ReadinessProbe.FailureThreshold": 3

Knative applies some revision defaults, but not in this case (in the ksvc above only a liveness probe is set):

https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/pkg/apis/serving/v1/revision_defaults.go#L136-L158

One thing to consider, although it does not seem to be the case here, is the following:

Caution: Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.

What happens if you increase the readiness failure threshold to 5? Right now it seems the kubelet will try for 3*10=30 seconds and then fail, which is shorter than the draining period. That probably explains what you see in the events: 6 failures and 2 container restarts. This is my rough guess.
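
To spell out that timing argument, here is a tiny sketch; the 10s period and threshold of 3 for the queue-proxy readiness probe and the 45s drain timeout are the numbers assumed in the guess above, not values read from a live cluster.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Numbers assumed in the comment above, not inspected on a cluster.
	period := 10 * time.Second
	failureThreshold := 3
	quietPeriod := 45 * time.Second // default drain timeout in knative.dev/pkg

	unreadyAfter := time.Duration(failureThreshold) * period
	fmt.Printf("readiness gives up after %v, draining lasts %v\n", unreadyAfter, quietPeriod)
	fmt.Println("probe window ends before draining does:", unreadyAfter < quietPeriod) // true
}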

skonto avatar Feb 09 '22 10:02 skonto

hey, thanks for digging in. Just to be sure, are you suggesting that I should try changing the readiness failure threshold of the queue-proxy container? If so, I see what you are saying: since the 3 checks end before the draining period, queue-proxy keeps returning 503 for the whole duration of those 3 checks, correct?

Another scenario I tested, might be relevant - this time I added a readiness probe in the ksvc:

          readinessProbe:
            httpGet:
              path: /healthz/readiness
              port: 0
            periodSeconds: 15
            failureThreshold: 5

and I expected the same failure log in kubelet, but I got this:

I0209 08:54:41.457686   12942 http.go:137] Probe failed for http://10.233.65.149:8012/ with request headers map[Accept:[*/*] K-Network-Probe:[queue] User-Agent:[kube-probe/1.20]], response body:
I0209 08:54:41.457730   12942 prober.go:117] Readiness probe for "healthprobes-go-00001-deployment-6877485457-cw7lz_default(2f04f8af-f9aa-4bb9-938e-a2a1d5b8cb28):queue-proxy" failed (failure): HTTP probe failed with statuscode: 503

I will try changing the default queue-proxy readiness probe's failureThreshold in serving code.

Shashankft9 avatar Feb 09 '22 13:02 Shashankft9

@Shashankft9 I meant the user container, but let me explain, because the issue turns out to be different. I debugged the scenario with the following settings (using your code):

          readinessProbe:
            httpGet:
              path: /healthz/readiness
              port: 0
            periodSeconds: 15
            failureThreshold: 5
          livenessProbe:
            httpGet:
              path: /healthz/liveness
              port: 0
            periodSeconds: 15
            failureThreshold: 1

Here is what happens after a while:

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
healthprobes-go-00001-deployment-74f6787d8f-sz6mq   1/2     Running   1          3m43s

Pod description:

...
  Normal   Scheduled  5m48s                 default-scheduler  Successfully assigned default/healthprobes-go-00001-deployment-74f6787d8f-sz6mq to minikube
  Normal   Pulling    5m46s                 kubelet            Pulling image "shashankft/healthprobes@sha256:cc1cb4323e4d1cdee62e3ae7623ff6a213db72e89c17ad4772c39fe841b98fb9"
  Normal   Pulled     5m28s                 kubelet            Successfully pulled image "shashankft/healthprobes@sha256:cc1cb4323e4d1cdee62e3ae7623ff6a213db72e89c17ad4772c39fe841b98fb9" in 17.522624817s
  Normal   Pulling    5m27s                 kubelet            Pulling image "docker.io/skonto/queue-39be6f1d08a095bd076a71d288d295b6@sha256:c75f19f83cddb57026535d0069689760554e867c32aea20e51850cc9b894fbd1"
  Normal   Pulled     5m15s                 kubelet            Successfully pulled image "docker.io/skonto/queue-39be6f1d08a095bd076a71d288d295b6@sha256:c75f19f83cddb57026535d0069689760554e867c32aea20e51850cc9b894fbd1" in 12.338262103s
  Normal   Created    5m14s                 kubelet            Created container queue-proxy
  Normal   Started    5m14s                 kubelet            Started container queue-proxy
  Normal   Killing    4m33s                 kubelet            Container user-container failed liveness probe, will be restarted
  Warning  Unhealthy  4m33s (x3 over 5m3s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Pulled     4m3s                  kubelet            Container image "shashankft/healthprobes@sha256:cc1cb4323e4d1cdee62e3ae7623ff6a213db72e89c17ad4772c39fe841b98fb9" already present on machine
  Normal   Started    4m2s (x2 over 5m27s)  kubelet            Started container user-container
  Normal   Created    4m2s (x2 over 5m28s)  kubelet            Created container user-container
  Warning  Unhealthy  33s (x19 over 5m3s)   kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503

So if you check the link above, the implementation makes sure the readiness probe is set only on the queue-proxy container (on behalf of the user container), while the liveness probe is set on the user container, although all HTTP probe calls go through queue-proxy first anyway. This is important because the only container that gets restarted is the user container, which is restarted due to the liveness failure after the initial call to the /start-failing path. Once it has restarted, the liveness check no longer fails. However, the queue-proxy container is stuck in the draining phase, and readiness probes keep failing until the pod is unhealthy and eventually removed. Here is why (btw, these are the latest entries of the kubelet log):

The drain function only resets the timer when a normal request is made; otherwise it keeps returning "shutting down" to probes (and such a normal request can't happen here, because the unhealthy container cannot be accessed).

https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/vendor/knative.dev/pkg/network/handlers/drain.go#L88-L111

I modified the func above by adding some comments, and as you can see from the queue-proxy logs it is stuck.
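
To make that concrete, here is a self-contained simplification of the drain behaviour being described. This is neither the real knative.dev/pkg Drainer nor the modified code mentioned above; names, locking and structure are made up. The point it illustrates: probes keep getting 503 "shutting down" as long as the timer exists, and only ordinary requests reset it.

// Simplified illustration of the drain behaviour discussed in this thread.
// NOT the real knative.dev/pkg Drainer.
package drainsketch

import (
	"net/http"
	"strings"
	"sync"
	"time"
)

type drainer struct {
	mu          sync.Mutex
	inner       http.Handler
	quietPeriod time.Duration
	timer       *time.Timer // non-nil means "we are draining"
}

// isProbe roughly mirrors the header checks visible in the kubelet logs above.
func isProbe(r *http.Request) bool {
	return strings.HasPrefix(r.Header.Get("User-Agent"), "kube-probe/") ||
		r.Header.Get("K-Network-Probe") != ""
}

func (d *drainer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	d.mu.Lock()
	draining := d.timer != nil
	if draining && !isProbe(r) {
		// Ordinary traffic pushes the quiet period out...
		d.timer.Reset(d.quietPeriod)
	}
	d.mu.Unlock()

	// ...but probes only ever see 503 "shutting down" while the timer exists,
	// which is exactly what the readiness failures above show.
	if draining && isProbe(r) {
		http.Error(w, "shutting down", http.StatusServiceUnavailable)
		return
	}
	d.inner.ServeHTTP(w, r)
}

// Drain is invoked when the user container is about to be restarted. It blocks
// until no ordinary request has been seen for quietPeriod -- and, crucially for
// this issue, it never sets the timer back to nil afterwards. (Simplified:
// assumes a single Drain call.)
func (d *drainer) Drain() {
	done := make(chan struct{})
	var once sync.Once
	d.mu.Lock()
	if d.timer == nil {
		d.timer = time.AfterFunc(d.quietPeriod, func() {
			once.Do(func() { close(done) })
		})
	}
	d.mu.Unlock()
	<-done
}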

Btw, readiness probes are managed by queue-proxy, which polls the user container every 50ms at the readiness URL that was passed in. It respects the readiness probe settings, which are passed via an env var to the queue-proxy container and used later to set up the prober logic:

      SERVING_READINESS_PROBE:           {"httpGet":{"path":"/healthz/readiness","port":8080,"host":"127.0.0.1","scheme":"HTTP","httpHeaders":[{"name":"K-Kubelet-Probe","value":"queue"}]},"timeoutSeconds":1,"periodSeconds":15,"successThreshold":1,"failureThreshold":3}
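
Roughly speaking, that env var carries a serialized corev1.Probe. A simplified sketch of how a sidecar could consume it follows; this is not the actual queue-proxy code, and the bare 50ms retry loop ignores timeoutSeconds and the thresholds.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// SERVING_READINESS_PROBE carries the (defaulted) readiness probe as JSON.
	var probe corev1.Probe
	if err := json.Unmarshal([]byte(os.Getenv("SERVING_READINESS_PROBE")), &probe); err != nil {
		panic(err)
	}

	hg := probe.HTTPGet // assume an httpGet probe, as in the dump above
	url := fmt.Sprintf("http://%s:%d%s", hg.Host, hg.Port.IntValue(), hg.Path)

	// Aggressive-polling stand-in: retry every 50ms until the user container
	// answers with a non-error status.
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < http.StatusBadRequest {
				break
			}
		}
		time.Sleep(50 * time.Millisecond)
	}
	fmt.Println("user container ready at", url)
}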

/cc @julz is this behavior expected (by design)?

skonto avatar Feb 10 '22 23:02 skonto

@Shashankft9 @julz gentle ping

skonto avatar Feb 21 '22 14:02 skonto

It seems to be the case that https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/vendor/knative.dev/pkg/network/handlers/drain.go#L165-L170 never returns false: once d.timer is set, it is never set back to nil, so queue-proxy stays in drain mode forever because this condition is always true.

If the idea of drain mode is to last for some QuietPeriod, which by default is 45s, would it make sense to add d.timer = nil after that period has elapsed, i.e. after this line: https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/vendor/knative.dev/pkg/network/handlers/drain.go#L129?
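
In terms of the simplified drainer sketch a few comments up, the suggestion amounts to something like the following; this paraphrases the proposal against that sketch and is not an actual Knative patch.

// Drain, with the proposed change applied to the simplified sketch above:
// clear the timer once the quiet period has elapsed, so ServeHTTP stops
// treating the handler as draining and readiness probes can pass again.
func (d *drainer) Drain() {
	done := make(chan struct{})
	var once sync.Once
	d.mu.Lock()
	if d.timer == nil {
		d.timer = time.AfterFunc(d.quietPeriod, func() {
			d.mu.Lock()
			d.timer = nil // proposed addition: leave draining mode
			d.mu.Unlock()
			once.Do(func() { close(done) })
		})
	}
	d.mu.Unlock()
	<-done
}

Whether clearing the timer is safe when the whole Pod (rather than just the user container) is going away is exactly the concern raised further down in the thread.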

Shashankft9 avatar Mar 11 '22 07:03 Shashankft9

The QuietPeriod is overridden on the queue-proxy side:

knative/pkg sets this to 45s by default here: https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/vendor/knative.dev/pkg/network/handlers/drain.go#L123

However, queue-proxy's main sets it to 30s: https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/cmd/queue/main.go#L64

Some more context:

The drain call here is what we experience: https://github.com/knative/serving/blob/52f07b02254b42f18fa0f71dbb0462410a5bc7b1/cmd/queue/main.go#L354 (line 16 in the queue-proxy logs: https://gist.github.com/skonto/749b5a34b464a786d2d3f473da0453d2). This drain is called because we use preStop handlers on the user container (https://github.com/knative/serving/blob/9fb7372faad456c23274e0a62fc9d15e382c801f/pkg/reconciler/revision/resources/deploy.go#L80), and the user container was restarted.

Kubernetes sends the preStop event immediately before the container is terminated, and Kubernetes' management of the container blocks until the preStop handler completes, unless the Pod's grace period expires. So we have a blocking situation: preStop does return at some point (as described next), but it neither clears the draining condition nor sets the timer back to nil.

Now, when we drain for the first time we set a timer, and that timer is continuously reset as long as any user request arrives in the meantime. This is the key idea, per the code comment: When the Drainer is told to Drain, it will immediately start to fail probes with a "500 shutting down", and the call will block until no requests have been received for QuietPeriod (defaults to network.DefaultDrainTimeout).

Now, if we set that timer to nil after draining is done, I suspect we will be able to move on with readiness probes. The invariant is that pods should finish in-flight requests before shutting down; in our case that means draining all requests before restarting and not accepting any new ones before the container is ready again. I am wondering if there is a design doc for this behavior.

skonto avatar Mar 11 '22 14:03 skonto

@skonto I am running into this issue on knative v1.1.0, wondering if there is a fix WIP.

pramenn avatar Mar 18 '22 17:03 pramenn

@pramenn afaik @Shashankft9 was planning to do a PR to restart the timer as a candidate fix.

skonto avatar Mar 21 '22 10:03 skonto

hey, yeah, I have made the change in my dev environment and the readiness probe seems to work fine with it. Waiting on @dprotaso's opinion on the solution suggested a couple of comments above and also (same solution) here: https://knative.slack.com/archives/CA4DNJ9A4/p1646983509263459

Shashankft9 avatar Mar 21 '22 11:03 Shashankft9

We are also running into this issue, and were wondering if there is any recent update? (@Shashankft9)

Thanks.

marc-laroche avatar Apr 12 '22 12:04 marc-laroche

/triage accepted

Thanks for the deep breakdown everyone - I think what we really need is a clear signal to distinguish between the Pod shutting down and either container (queue-proxy/user-container) restarting.

I'm not sure that's straightforward. Will dig into this for the next release.

I worry about setting the timer to nil, because I believe that in the case where the Pod is shutting down it may let a request through rather than returning 503 as expected.

/assign @dprotaso

dprotaso avatar May 10 '22 22:05 dprotaso

Hi, we are also facing this issue, is there any workaround?

langelini-photosi avatar Jul 11 '22 14:07 langelini-photosi

I see @dprotaso changed the milestone for this issue to v1.7.0, but I was wondering if anyone is aware of a workaround, as @langelini-photosi is also asking. I think this issue makes it impossible to use any of the 1.x versions in a production environment.

faxioman avatar Jul 12 '22 08:07 faxioman

It would be very interesting to know how google cloud got around this issue.

I tried a quick setup of Cloud Run for Anthos and saw that it does not present this issue.

Anyway, on cloud run for anthos they provide their own queue-proxy: gke.gcr.io/knative/queue@sha256:2148fc4af5f5cda0594d97c3027b316bcba3c8f400d3d57087807f90e80641d3

I tried to use this queue-proxy image with my Knative 1.3.1 in a self-managed cluster and it seems to solve the issue.

I hope this helps.

pbenini avatar Jul 12 '22 14:07 pbenini

Hey @faxioman - the 1.6 release is today. I'll be taking a look at this for 1.7.

It would be very interesting to know how google cloud got around this issue.

Google's Cloud Run is a separate implementation and they've stopped contributing to upstream

dprotaso avatar Jul 12 '22 15:07 dprotaso

Thanks @dprotaso! Until then... if anyone knows a workaround, it is very welcome. Currently the only "workaround" we have found is to convince the developers to build services whose liveness probe never fails... 😅

faxioman avatar Jul 12 '22 15:07 faxioman