
Timeout issue on long requests

Open nicolasmingo opened this issue 3 years ago • 12 comments

/area networking

Hi, I'm trying to use Knative Functions to make streaming HTTP requests. It works well, but I have an issue with long requests: after 5 minutes I receive a timeout error from the queue-proxy and my request is closed.

I've tried some solutions but ran into other issues. One was to modify the ConfigMap named "config-defaults" (in the knative-serving namespace):

max-revision-timeout-seconds: '21600' # 6 hours
revision-timeout-seconds: '21600' # 6 hours
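
In full manifest form, that change is roughly the following (a sketch only; just the two keys above are changed, the rest of the standard config-defaults ConfigMap is omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: knative-serving
data:
  max-revision-timeout-seconds: '21600' # 6 hours
  revision-timeout-seconds: '21600'     # 6 hours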

It works well, BUT when the pod needs to terminate, it gets stuck in Terminating status. I've discovered that Knative copies max-revision-timeout-seconds to terminationGracePeriodSeconds. So I've tried to patch my Knative ksvc YAML to override the terminationGracePeriodSeconds parameter (normally it is in a PodSpec), but it seems impossible to change it through Knative.

Can you give me some guidance on how I can set up the configuration for my needs, please?

Nicolas

nicolasmingo avatar Jan 27 '22 15:01 nicolasmingo

Hi @nicolasmingo!

I've discovered that Knative copies max-revision-timeout-seconds to terminationGracePeriodSeconds.

Could you clarify what you mean here a bit? When the revision deployment is created, what is copied is the revision's timeoutSeconds field (more on this below): https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/pkg/reconciler/revision/resources/deploy.go#L208

So I've tried to patch my Knative ksvc YAML to override the terminationGracePeriodSeconds parameter (normally it is in a PodSpec)

To clarify a few points, copying from the config-defaults ConfigMap: https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/config/core/configmaps/defaults.yaml#L40-L56

  • Each revision gets a 5 min default value if nothing is specified in its timeoutSeconds field: https://github.com/knative/serving/blob/main/pkg/apis/serving/v1/revision_defaults.go#L53 (more about the revision config here: https://github.com/knative/serving/blob/main/docs/serving-api.md#revision). Recently we added a maxDurationSeconds field, which is work in progress and will apply per revision (defaults will be introduced in the next release).
  • Each revision timeout is validated against the maximum revision timeout (default 10 min): https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/pkg/apis/serving/v1/revision_validation.go#L171
  • The activator's terminationGracePeriod needs to be updated accordingly if you modify the defaults, because requests may go through the activator (default 10 min); see https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/config/core/deployments/activator.yaml#L117-L124

Could you try updating the default configs first to allow a larger maximum if you need to, and then set a revision timeoutSeconds accordingly? Also, don't forget about the activator's grace period.
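
A minimal sketch of the per-revision part (the Service name and image are placeholders, and 3600 is just an illustrative value, not a recommendation):

# The revision's request timeout; it must not exceed max-revision-timeout-seconds
# from the config-defaults ConfigMap (raise that first if needed).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: long-request-function                   # placeholder name
spec:
  template:
    spec:
      timeoutSeconds: 3600                       # illustrative; must be <= the configured maximum
      containers:
        - image: example.com/my-function:latest  # placeholder image

The activator Deployment's terminationGracePeriodSeconds would then need a matching bump, as noted in the last bullet.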

skonto avatar Feb 03 '22 14:02 skonto

Thanks @skonto, your solution works for me.

little-eyes avatar Mar 07 '22 07:03 little-eyes

Hi, thank you for your answers.

I have continued to inspect and I would like to share what I see.

  • If I request a function that sends me (as an HTTP response) a stream that takes more than 15 minutes: it works.
  • If I request a function that receives (in an HTTP request) a stream that I send and sends me back a result after 15 minutes, the connection is cut (depending on the timeoutSeconds value), even if I'm sending data (uploading) during this time.
  • If I send and receive streams at the same time, it works, but sometimes I have other reliability problems with Envoy/Istio (upstream_reset_after_response_started{protocol_error}).

Maybe Knative is only inspecting the response stream to detect the timeout? Do you know how the second case can work in Knative?

nicolasmingo avatar Mar 29 '22 09:03 nicolasmingo

Hi, for the Istio issue it seems that at some point the connection is rejected due to a malformed HTTP request. Could you enable debug logs on the Istio side and post them? Also, let's create two separate issues to discuss further (bullets 2 and 3). @nak3 may have more to add here.

skonto avatar Mar 31 '22 09:03 skonto

Hi, I've found the issue. It is not related to Istio; we tried the same code without Knative and it worked.

I think it is related to the inability to do bidirectional streaming over HTTP/1.1: https://github.com/knative/serving/issues/6146

Originally, I wanted to make long requests with an upload stream. The only way to do that (without bidirectional streaming) is to increase the timeout to 3600s, because the HTTP response can arrive very late. But if I configure a 3600s timeout I have another problem in Knative: my pod stays in Terminating status for 3600s. If there were a way to override the terminationGracePeriod with a custom value (instead of deriving it from timeoutSeconds), I would reach my goal.

nicolasmingo avatar Mar 31 '22 11:03 nicolasmingo

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Jun 30 '22 01:06 github-actions[bot]

@skonto thank you so much for your clarification on the timeout mechanism! Is it possible to configure Knative to have no timeout on requests (or a timeout of around 1h) and terminationGracePeriodSeconds: 0?

henriquevasconcelos avatar Jul 27 '22 18:07 henriquevasconcelos

/remove-lifecycle stale

henriquevasconcelos avatar Jul 27 '22 18:07 henriquevasconcelos

You might also want to look into the async component.

cc @AngeloDanducci

psschwei avatar Jul 27 '22 18:07 psschwei

You might also want to look into the async component.

cc @AngeloDanducci

Thanks for the reply @psschwei!

Ideally, we should be using async requests for this. However, the framework we're using requires completion of the request, hence the long timeout :)

henriquevasconcelos avatar Jul 27 '22 19:07 henriquevasconcelos

Just to clarify, my issue is that I have a request that needs 20 minutes to complete. After editing timeoutSeconds I no longer get the timeout error, but now my Pods are stuck in Terminating, which did not happen before setting this field.

Is there a way to terminate the pods faster, i.e., allow for a custom timeout and a timely termination of the pods?


henriquevasconcelos avatar Jul 29 '22 13:07 henriquevasconcelos

Is there a way to terminate the pods faster, i.e., allow for a custom timeout and a timely termination of the pods?

I think once #12970 lands you may be able to use the max duration without needing to tweak the termination grace period.
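
For illustration only, the idea would be to cap the total request duration on the revision itself, roughly like this (this assumes the maxDurationSeconds field mentioned earlier in the thread; the name and image are placeholders):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: long-request-function                   # placeholder name
spec:
  template:
    spec:
      maxDurationSeconds: 1200                   # illustrative cap on how long a request may stay open (~20 min)
      containers:
        - image: example.com/my-function:latest  # placeholder image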

psschwei avatar Aug 01 '22 17:08 psschwei

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Oct 31 '22 01:10 github-actions[bot]

This issue or pull request is stale because it has been open for 90 days with no activity.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close

/lifecycle stale

knative-prow-robot avatar Nov 30 '22 02:11 knative-prow-robot

Hello everyone.

I am also interested in setting max-revision-timeout-seconds to a larger value. However, I can't find how to update the terminationGracePeriod in the activator.

I am using the operator to install Knative Serving:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: 1.8.0 # Same version as the operator
  config:
    defaults:
      max-revision-timeout-seconds: "7200" # 120 minutes
    domain:
      kn.example.com: "" # your DNS
    features:
      kubernetes.podspec-persistent-volume-claim: "enabled"
      kubernetes.podspec-persistent-volume-write: "enabled"
      kubernetes.podspec-affinity: "enabled"
      kubernetes.podspec-tolerations: "enabled"
      kubernetes.podspec-init-containers: "enabled"
    istio:
      gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.istio-ingress.svc.cluster.local

Does anyone know how to achieve this?

mkyrilov avatar Dec 08 '22 20:12 mkyrilov

One way is to just edit the Activator deployment, i.e. kubectl edit deploy -n knative-serving activator and then update the terminationGracePeriodSeconds field.
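
For example, after the edit the relevant part of the Deployment would look roughly like the excerpt below (not a full manifest; 7200 simply mirrors the max-revision-timeout-seconds used above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: activator
  namespace: knative-serving
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 7200   # keep this >= max-revision-timeout-seconds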

cc @houshengbo in case there's a way to do that in the operator

psschwei avatar Dec 08 '22 20:12 psschwei

I did try editing the activator deployment manually. It updates fine, but then immediately gets reverted. I am assuming the operator does this.

mkyrilov avatar Dec 08 '22 20:12 mkyrilov

In that case, since it's operator-specific, let's move the discussion over there... I opened https://github.com/knative/operator/issues/1295 with your question.

psschwei avatar Dec 08 '22 20:12 psschwei

Hello @psschwei

I think once https://github.com/knative/serving/pull/12970 lands you may be able to use the max duration without needing to tweak the termination grace period.

It looks like the termination grace period is still taken from timeoutSeconds, which makes it impossible to specify them separately.

I'm one of those who want to handle long requests :) Although our requests may take many minutes, we don't want to wait the same amount of time for the pod to be terminated.

Thanks in advance for help!

Kaiass avatar Dec 22 '22 16:12 Kaiass

OK, in that case I think the alternative, as things stand, would be to see if the async component might fit your needs.

psschwei avatar Jan 09 '23 17:01 psschwei

From the history above, I got:

  • The initial question was resolved
  • The operator configuration is handled in https://github.com/knative/serving/issues/12564#issuecomment-1343345001
  • Kaiass's use case (https://github.com/knative/serving/issues/12564#issuecomment-1363028807) is better handled by the async component.

Seems like no more points are open, so I'm closing this. Feel free to reopen if I'm mistaken.

/close

ReToCode avatar Feb 15 '23 12:02 ReToCode

@ReToCode: Closing this issue.

In response to this:

From the history above, I got:

  • The initial question was resolved
  • The operator configuration is handled in https://github.com/knative/serving/issues/12564#issuecomment-1343345001
  • Kaiass's use case (https://github.com/knative/serving/issues/12564#issuecomment-1363028807) is better handled by the async component.

Seems like no more points are open, so I'm closing this. Feel free to reopen if I'm mistaken.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

knative-prow[bot] avatar Feb 15 '23 12:02 knative-prow[bot]

Is anybody seeing issues with the latest version of Knative (1.12)?

{"severity":"ERROR","timestamp":"2023-12-14T00:14:02.568534211Z","logger":"queueproxy","caller":"network/error_handler.go:33","message":"error reverse proxying request; sockstat: sockets: used 10\nTCP: inuse 2 orphan 3 tw 14 alloc 759 mem 253\nUDP: inuse 0 mem 0\nUDPLITE: inuse 0\nRAW: inuse 0\nFRAG: inuse 0 memory 0\n","commit":"2659cc3","knative.dev/key":"dxcm/plugin-github-00003","knative.dev/pod":"plugin-github-00003-deployment-7f9f764f87-8jvt2","error":"context canceled","stacktrace":"knative.dev/pkg/network.ErrorHandler.func1\n\tknative.dev/[email protected]/network/error_handler.go:33\nnet/http/httputil.(*ReverseProxy).ServeHTTP\n\tnet/http/httputil/reverseproxy.go:475\nknative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP\n\tknative.dev/serving/pkg/queue/request_metric.go:199\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3\n\tknative.dev/serving/pkg/queue/handler.go:76\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4\n\tknative.dev/serving/pkg/queue/forwarded_shim.go:54\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4\n\tknative.dev/serving/pkg/http/handler/timeout.go:118"}
{"severity":"ERROR","timestamp":"2023-12-14T00:24:04.591008104Z","logger":"queueproxy","caller":"network/error_handler.go:33","message":"error reverse proxying request; sockstat: sockets: used 10\nTCP: inuse 2 orphan 3 tw 12 alloc 595 mem 230\nUDP: inuse 0 mem 0\nUDPLITE: inuse 0\nRAW: inuse 0\nFRAG: inuse 0 memory 0\n","commit":"2659cc3","knative.dev/key":"dxcm/plugin-github-00003","knative.dev/pod":"plugin-github-00003-deployment-7f9f764f87-8jvt2","error":"context canceled","stacktrace":"knative.dev/pkg/network.ErrorHandler.func1\n\tknative.dev/[email protected]/network/error_handler.go:33\nnet/http/httputil.(*ReverseProxy).ServeHTTP\n\tnet/http/httputil/reverseproxy.go:475\nknative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP\n\tknative.dev/serving/pkg/queue/request_metric.go:199\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3\n\tknative.dev/serving/pkg/queue/handler.go:76\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4\n\tknative.dev/serving/pkg/queue/forwarded_shim.go:54\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4\n\tknative.dev/serving/pkg/http/handler/timeout.go:118"}
{"severity":"ERROR","timestamp":"2023-12-14T00:34:08.632324508Z","logger":"queueproxy","caller":"network/error_handler.go:33","message":"error reverse proxying request; sockstat: sockets: used 10\nTCP: inuse 2 orphan 2 tw 10 alloc 602 mem 204\nUDP: inuse 0 mem 0\nUDPLITE: inuse 0\nRAW: inuse 0\nFRAG: inuse 0 memory 0\n","commit":"2659cc3","knative.dev/key":"dxcm/plugin-github-00003","knative.dev/pod":"plugin-github-00003-deployment-7f9f764f87-8jvt2","error":"context canceled","stacktrace":"knative.dev/pkg/network.ErrorHandler.func1\n\tknative.dev/[email protected]/network/error_handler.go:33\nnet/http/httputil.(*ReverseProxy).ServeHTTP\n\tnet/http/httputil/reverseproxy.go:475\nknative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP\n\tknative.dev/serving/pkg/queue/request_metric.go:199\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3\n\tknative.dev/serving/pkg/queue/handler.go:76\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4\n\tknative.dev/serving/pkg/queue/forwarded_shim.go:54\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4\n\tknative.dev/serving/pkg/http/handler/timeout.go:118"}
apiVersion: v1
data:
  enable-service-links: "false"
  max-revision-timeout-seconds: "60000"
  revision-timeout-seconds: "60000"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"enable-service-links":"false","max-revision-timeout-seconds":"60000","revision-timeout-seconds":"60000"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config-defaults","namespace":"knative-serving"}}
  creationTimestamp: "2023-11-28T16:59:46Z"
  name: config-defaults
  namespace: knative-serving
  resourceVersion: "24361043"
  uid: 2496e813-166c-4756-aa7f-f1a83f946cdd

Updated activator as well. Validated the max is getting picked up. However, I am still seeing timeouts at exactly 10 minutes.

nkreiger avatar Dec 14 '23 00:12 nkreiger