
Informer watch call doesn't respect timeout from CallGenerator

haoming-db opened this issue 3 years ago • 3 comments

Describe the bug The ReflectorRunnable sets a random 5-10 minute timeout for each watch call. However, this timeout doesn't actually affect the watch call, because SharedInformerFactory implements a ListerWatcher whose watch call uses the underlying client's timeout instead:

  • At SharedInformerFactory line 212, Call call = callGenerator.generate(params); returns the call with the correct timeout (5-10 minutes).
  • At line 214, a new Call object is created with apiClient.getHttpClient().newCall(call.request()). The new Call takes its call timeout from the underlying OkHttpClient.callTimeoutMillis, which is 0 by default.

I understand that the comment there says it wants a 0 (unlimited) read timeout, but the call timeout is not respected either, which makes the random 5-10 minute timeout set in ReflectorRunnable meaningless.

This means SharedInformerFactory always sets the watch call timeout to the callTimeoutMillis of the underlying OkHttpClient (0 by default, i.e. unlimited), not the one set by ReflectorRunnable (5-10 minutes). With default settings the call timeout is 0, so a client-side timeout will never be triggered. The sketch below illustrates this.
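Here is a minimal, self-contained sketch of the underlying OkHttp behavior (not the client's actual code; it assumes the generated call carries a per-call timeout attached via Call.timeout(), as the report above describes): rebuilding a call from its request drops the per-call timeout and falls back to the client-level default.

    import java.util.concurrent.TimeUnit;

    import okhttp3.Call;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;

    public class CallTimeoutDemo {
      public static void main(String[] args) {
        // callTimeoutMillis defaults to 0 (unlimited).
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder().url("https://example.com/watch").build();

        // Analogous to callGenerator.generate(params): a timeout attached
        // to this specific Call instance.
        Call original = client.newCall(request);
        original.timeout().timeout(5, TimeUnit.MINUTES);

        // Analogous to apiClient.getHttpClient().newCall(call.request()):
        // the rebuilt Call starts from the client default again.
        Call rebuilt = client.newCall(original.request());

        System.out.println(original.timeout().timeoutNanos()); // 300000000000 (5 minutes)
        System.out.println(rebuilt.timeout().timeoutNanos());  // 0 (unlimited)
      }
    }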

Client Version 13.0.0 and above

Kubernetes Version n/a

Java Version n/a

To Reproduce Create a new SharedInformerFactory using all default settings.
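For example, a minimal setup following the informer example from the project README (the listNodeCall argument order is assumed from the v13-era generated client):

    import io.kubernetes.client.informer.SharedIndexInformer;
    import io.kubernetes.client.informer.SharedInformerFactory;
    import io.kubernetes.client.openapi.ApiClient;
    import io.kubernetes.client.openapi.apis.CoreV1Api;
    import io.kubernetes.client.openapi.models.V1Node;
    import io.kubernetes.client.openapi.models.V1NodeList;
    import io.kubernetes.client.util.Config;

    public class Repro {
      public static void main(String[] args) throws Exception {
        // Default client: OkHttpClient.callTimeoutMillis is 0 (unlimited).
        ApiClient apiClient = Config.defaultClient();
        CoreV1Api coreV1Api = new CoreV1Api(apiClient);

        SharedInformerFactory factory = new SharedInformerFactory(apiClient);
        SharedIndexInformer<V1Node> nodeInformer =
            factory.sharedIndexInformerFor(
                // params.timeoutSeconds is the random 5-10 minute value from
                // ReflectorRunnable, but the watch Call is later rebuilt
                // without the matching call timeout (see above).
                params ->
                    coreV1Api.listNodeCall(
                        null, null, null, null, null, null,
                        params.resourceVersion, null, params.timeoutSeconds,
                        params.watch, null),
                V1Node.class,
                V1NodeList.class);
        factory.startAllRegisteredInformers();
      }
    }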

Expected behavior SharedInformerFactory's listerWatcherFor() should respect the call timeout from CallGenerator.

KubeConfig n/a

Server (please complete the following information): n/a

Additional context n/a

haoming-db avatar Mar 25 '22 19:03 haoming-db

To work around this problem, you can either: 1) implement your own ListerWatcher to replace the default one in SharedInformerFactory; or 2) explicitly set a non-zero OkHttpClient.callTimeoutMillis (for example, 5-10 minutes), as sketched below.
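Option 2 might look like the following sketch (assuming the ApiClient is created via Config.defaultClient()):

    import java.util.concurrent.TimeUnit;

    import io.kubernetes.client.informer.SharedInformerFactory;
    import io.kubernetes.client.openapi.ApiClient;
    import io.kubernetes.client.util.Config;
    import okhttp3.OkHttpClient;

    public class Workaround {
      public static void main(String[] args) throws Exception {
        ApiClient apiClient = Config.defaultClient();

        // Rebuild the OkHttpClient with a non-zero call timeout; every Call
        // created from this client, including the watch calls rebuilt by
        // SharedInformerFactory, now has a 10-minute upper bound.
        OkHttpClient httpClient =
            apiClient.getHttpClient().newBuilder()
                .callTimeout(10, TimeUnit.MINUTES)
                .build();
        apiClient.setHttpClient(httpClient);

        SharedInformerFactory factory = new SharedInformerFactory(apiClient);
        // ... register informers as usual.
      }
    }

Note that callTimeout applies to every request made through this client, not just watch calls, so pick a value longer than your slowest expected list call.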

haoming-db avatar Mar 25 '22 19:03 haoming-db

@brendandburns @yue9944882 Could you help to triage? Thanks!

haoming-db avatar Apr 19 '22 18:04 haoming-db

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 18 '22 19:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 17 '22 20:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 16 '22 20:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 16 '22 20:09 k8s-ci-robot