Timeout issue on long requests
/area networking
Hi, I'm trying to use Knative Functions to make streaming HTTP requests. It works well, but I have an issue with long requests: after 5 minutes I receive a timeout error from queue-proxy and my request is closed.
I've tried some solutions but ran into other issues. First, I modified the ConfigMap named "config-defaults" (in the knative-serving namespace):
max-revision-timeout-seconds: '21600' # 6 hours
revision-timeout-seconds: '21600' # 6 hours
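(For reference, a change like this can be applied with roughly the following command; a sketch assuming kubectl access to the cluster:)

kubectl patch configmap config-defaults -n knative-serving \
  --type merge \
  -p '{"data":{"max-revision-timeout-seconds":"21600","revision-timeout-seconds":"21600"}}'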
It works well, BUT when the pod should be terminated, it gets stuck in Terminating status. I've discovered that Knative copies max-revision-timeout-seconds into terminationGracePeriodSeconds. So I tried to patch my Knative ksvc YAML to override the terminationGracePeriodSeconds parameter (normally it lives in a PodSpec), but it seems impossible to change it through Knative.
Can you give me some information on how to set up the configuration for my needs, please?
Nicolas
Hi @nicolasmingo!
I've discovered that knative copied max-revision-timeout-seconds to terminationGracePeriodSeconds.
Could you clarify what you mean here a bit? When the revision deployment is created, what is copied is the revision's timeoutSeconds field (more on this below): https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/pkg/reconciler/revision/resources/deploy.go#L208
So, I've tried to patch my knative ksvc yaml to override the terminationGracePeriodSeconds parameter (normally it is in a PodSpec)
To clarify a few points, copying from the config-defaults ConfigMap: https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/config/core/configmaps/defaults.yaml#L40-L56
- Each revision will get a 5 min default value if nothing is specified in its timeoutSeconds field. https://github.com/knative/serving/blob/main/pkg/apis/serving/v1/revision_defaults.go#L53 You can find more about the revision config here: https://github.com/knative/serving/blob/main/docs/serving-api.md#revision. Recently we added a maxDurationSeconds field, which is a work in progress and will apply per revision (defaults will be introduced in the next release).
- Each revision's timeout will be validated against the maximum revision timeout (default 10 min). https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/pkg/apis/serving/v1/revision_validation.go#L171
- The activator's terminationGracePeriod needs to be updated accordingly if you modify the defaults, because requests may go through the activator (default 10 min); check here: https://github.com/knative/serving/blob/1d9529495e175e7b36d3636d4d5c8739774ef876/config/core/deployments/activator.yaml#L117-L124
Could you try updating the default configs first to allow a larger maximum if you need to, and then set a revision timeoutSeconds accordingly? Also don't forget about the activator's grace period.
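For example, roughly (a sketch; the service name and image are placeholders, and the value must not exceed max-revision-timeout-seconds):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: streaming-fn               # placeholder name
spec:
  template:
    spec:
      timeoutSeconds: 21600        # per-revision timeout, must be <= max-revision-timeout-seconds
      containers:
        - image: example.com/streaming-fn   # placeholder image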
Thanks @skonto, your solution works for me.
Hi, thank you for your answers.
I have continued to inspect and I would like to share what I see.
- If I request a function that sends me (as an HTTP response) a stream that takes more than 15 minutes: it works.
- If I request a function that receives (in the HTTP request) a stream that I send, and sends me back a result after 15 minutes, the connection is cut (depending on the timeoutSeconds value), even if I'm sending data (uploading) during this time.
- If I send and receive streams at the same time, it works, but sometimes I have other reliability problems with envoy-istio (upstream_reset_after_response_started{protocol_error}).
Maybe Knative is only inspecting the response stream to detect a timeout? Do you know how the second case could work in Knative?
Hi, for the istio issue it seems that at some point the connection is rejected due to a malformed HTTP request. Could you enable debug logs on the istio side and post them? Also, let's create two separate issues to discuss further (bullets 2, 3). @nak3 may have more to add here.
Hi, I've found the issue. It is not related to istio; we tried the same code without Knative and it worked.
I think it is related to the incompatibility to do bidirectional streaming in HTTP/1.1: https://github.com/knative/serving/issues/6146
Originally, I wanted to make long requests with an upload stream. The only way to do that (without bidirectional streaming) is to increase the timeout to 3600s, because the HTTP response can arrive very late. If I configure the timeout to 3600s, I have another problem in Knative: my pod will stay in Terminating status for 3600s. If I had a way to override the terminationGracePeriod with a custom value (instead of timeoutSeconds), I would reach my goal.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
@skonto thank you so much for your clarification on the timeout mechanism!
Is it possible to configure Knative to have no timeout on requests (or a timeout around 1h) and a terminationGracePeriodSeconds of 0?
/remove-lifecycle stale
You might also want to look into the async component.
cc @AngeloDanducci
Thanks for the reply @psschwei!
Ideally, we should be using async requests for this. However, the framework we're using requires completion of the request, hence the long timeout :)
Just to clarify, my issue is that I have a request that needs 20 min to complete; after editing the timeoutSeconds I no longer get the timeout error, but now my Pods are stuck on Terminating, which did not happen before setting this flag.
Is there a way to terminate the pods faster, i.e., allow for a custom timeout and a timely termination of the pods?

Is there a way to terminate the pods faster, i.e., allow for a custom timeout and a timely termination of the pods?
I think once #12970 lands you may be able to use the max duration without needing to tweak the termination grace period.
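Roughly, the idea would be something like this (a sketch; maxDurationSeconds is the field mentioned in the revision spec docs above, and its availability and exact semantics depend on the release):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: long-request-svc           # placeholder name
spec:
  template:
    spec:
      timeoutSeconds: 300          # request timeout (see discussion above)
      maxDurationSeconds: 3600     # intended upper bound on the total request duration
      containers:
        - image: example.com/long-request-app   # placeholder image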
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
This issue or pull request is stale because it has been open for 90 days with no activity.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
/lifecycle stale
Hello everyone.
I am also interested in setting max-revision-timeout-seconds to a larger value. However, I can't find how to update the terminationGracePeriod in the activator.
I am using the operator to install Knative Serving:
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: 1.8.0 # Same version as the operator
  config:
    defaults:
      max-revision-timeout-seconds: "7200" # 120 minutes
    domain:
      kn.example.com: "" # your DNS
    features:
      kubernetes.podspec-persistent-volume-claim: "enabled"
      kubernetes.podspec-persistent-volume-write: "enabled"
      kubernetes.podspec-affinity: "enabled"
      kubernetes.podspec-tolerations: "enabled"
      kubernetes.podspec-init-containers: "enabled"
    istio:
      gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.istio-ingress.svc.cluster.local
Does anyone know how to achieve this?
One way is to just edit the Activator deployment, i.e. kubectl edit deploy -n knative-serving activator, and then update the terminationGracePeriodSeconds field.
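For example, roughly something like this (a sketch; adjust the value to match your max-revision-timeout-seconds):

kubectl patch deployment activator -n knative-serving \
  --type merge \
  -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":7200}}}}'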
cc @houshengbo in case there's a way to do that in the operator
I did try editing the activator deployment manually. It updates fine, but then immediately gets reverted back. I am assuming the operator does this.
In that case, since it's operator-specific, let's move the discussion over there... I opened https://github.com/knative/operator/issues/1295 with your question.
Hello @psschwei
I think once https://github.com/knative/serving/pull/12970 lands you may be able to use the max duration without needing to tweak the termination grace period.
It looks like the termination grace period is still taken from timeoutSeconds, which makes it impossible to specify them separately.
I'm one of those who wants to handle long requests :) Although our requests may take many minutes, we don't want to wait the same amount of time until the pod is terminated.
Thanks in advance for help!
OK, in that case I think the alternative, as things stand, would be to see if the async component might fit your needs.
From the history above, I gather:
- The initial question was resolved
- The operator configuration is handled in https://github.com/knative/serving/issues/12564#issuecomment-1343345001
- Kaiass' use case (https://github.com/knative/serving/issues/12564#issuecomment-1363028807) is better handled by the async component.
Seems like no more points are open, so I'm closing this. Feel free to reopen if I'm mistaken.
/close
@ReToCode: Closing this issue.
In response to this:
From the history above, I gather:
- The initial question was resolved
- The operator configuration is handled in https://github.com/knative/serving/issues/12564#issuecomment-1343345001
- Kaiass' use case (https://github.com/knative/serving/issues/12564#issuecomment-1363028807) is better handled by the async component.
Seems like no more points are open, so I'm closing this. Feel free to reopen if I'm mistaken.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is anybody seeing issues with the latest version of Knative (1.12)?
everity":"ERROR","timestamp":"2023-12-14T00:14:02.568534211Z","logger":"queueproxy","caller":"network/error_handler.go:33","message":"error reverse proxying request; sockstat: sockets: used 10\nTCP: inuse 2 orphan 3 tw 14 alloc 759 mem 253\nUDP: inuse 0 mem 0\nUDPLITE: inuse 0\nRAW: inuse 0\nFRAG: inuse 0 memory 0\n","commit":"2659cc3","knative.dev/key":"dxcm/plugin-github-00003","knative.dev/pod":"plugin-github-00003-deployment-7f9f764f87-8jvt2","error":"context canceled","stacktrace":"knative.dev/pkg/network.ErrorHandler.func1\n\tknative.dev/[email protected]/network/error_handler.go:33\nnet/http/httputil.(*ReverseProxy).ServeHTTP\n\tnet/http/httputil/reverseproxy.go:475\nknative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP\n\tknative.dev/serving/pkg/queue/request_metric.go:199\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3\n\tknative.dev/serving/pkg/queue/handler.go:76\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4\n\tknative.dev/serving/pkg/queue/forwarded_shim.go:54\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4\n\tknative.dev/serving/pkg/http/handler/timeout.go:118"}
{"severity":"ERROR","timestamp":"2023-12-14T00:24:04.591008104Z","logger":"queueproxy","caller":"network/error_handler.go:33","message":"error reverse proxying request; sockstat: sockets: used 10\nTCP: inuse 2 orphan 3 tw 12 alloc 595 mem 230\nUDP: inuse 0 mem 0\nUDPLITE: inuse 0\nRAW: inuse 0\nFRAG: inuse 0 memory 0\n","commit":"2659cc3","knative.dev/key":"dxcm/plugin-github-00003","knative.dev/pod":"plugin-github-00003-deployment-7f9f764f87-8jvt2","error":"context canceled","stacktrace":"knative.dev/pkg/network.ErrorHandler.func1\n\tknative.dev/[email protected]/network/error_handler.go:33\nnet/http/httputil.(*ReverseProxy).ServeHTTP\n\tnet/http/httputil/reverseproxy.go:475\nknative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP\n\tknative.dev/serving/pkg/queue/request_metric.go:199\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3\n\tknative.dev/serving/pkg/queue/handler.go:76\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4\n\tknative.dev/serving/pkg/queue/forwarded_shim.go:54\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4\n\tknative.dev/serving/pkg/http/handler/timeout.go:118"}
{"severity":"ERROR","timestamp":"2023-12-14T00:34:08.632324508Z","logger":"queueproxy","caller":"network/error_handler.go:33","message":"error reverse proxying request; sockstat: sockets: used 10\nTCP: inuse 2 orphan 2 tw 10 alloc 602 mem 204\nUDP: inuse 0 mem 0\nUDPLITE: inuse 0\nRAW: inuse 0\nFRAG: inuse 0 memory 0\n","commit":"2659cc3","knative.dev/key":"dxcm/plugin-github-00003","knative.dev/pod":"plugin-github-00003-deployment-7f9f764f87-8jvt2","error":"context canceled","stacktrace":"knative.dev/pkg/network.ErrorHandler.func1\n\tknative.dev/[email protected]/network/error_handler.go:33\nnet/http/httputil.(*ReverseProxy).ServeHTTP\n\tnet/http/httputil/reverseproxy.go:475\nknative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP\n\tknative.dev/serving/pkg/queue/request_metric.go:199\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3\n\tknative.dev/serving/pkg/queue/handler.go:76\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4\n\tknative.dev/serving/pkg/queue/forwarded_shim.go:54\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2136\nknative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4\n\tknative.dev/serving/pkg/http/handler/timeout.go:118"}
apiVersion: v1
data:
  enable-service-links: "false"
  max-revision-timeout-seconds: "60000"
  revision-timeout-seconds: "60000"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"enable-service-links":"false","max-revision-timeout-seconds":"60000","revision-timeout-seconds":"60000"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config-defaults","namespace":"knative-serving"}}
  creationTimestamp: "2023-11-28T16:59:46Z"
  name: config-defaults
  namespace: knative-serving
  resourceVersion: "24361043"
  uid: 2496e813-166c-4756-aa7f-f1a83f946cdd
Updated activator as well. Validated the max is getting picked up. However, I am still seeing timeouts at exactly 10 minutes.