Informer watch call doesn't respect timeout from CallGenerator
Describe the bug
The ReflectorRunnable sets a random 5-10 minute timeout for the watch call. However, this timeout does not actually affect the watch call, because the ListerWatcher that SharedInformerFactory implements issues its watch call with the underlying client's timeout instead.
In SharedInformerFactory, line 212 returns a call with the correct timeout (5-10 minutes): Call call = callGenerator.generate(params);
However, line 214 creates a new Call object via apiClient.getHttpClient().newCall(call.request()). The new Call takes its call timeout from the underlying OkHttpClient.callTimeoutMillis, which defaults to 0. I understand the comment there says it wants a 0 (unlimited) read timeout, but it does not respect the call timeout either, which makes the random 5-10 minute watch timeout in ReflectorRunnable meaningless.
This means SharedInformerFactory always uses the callTimeoutMillis of the underlying OkHttpClient (by default 0, i.e. unlimited) as the watch call timeout, not the 5-10 minute timeout set by ReflectorRunnable. With default settings the call timeout is 0, so a client-side timeout is never triggered.
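To make the OkHttp behavior behind this concrete, here is a minimal, self-contained sketch using plain OkHttp (3.12+/4.x) and a hypothetical URL, not the informer code itself: a Call's per-call timeout is lost once its Request is re-submitted through a client whose callTimeout is the default 0.

```java
import java.util.concurrent.TimeUnit;
import okhttp3.Call;
import okhttp3.OkHttpClient;
import okhttp3.Request;

public class CallTimeoutSketch {
  public static void main(String[] args) {
    // Default client: callTimeout is 0 (unlimited), like a default ApiClient's OkHttpClient.
    OkHttpClient baseClient = new OkHttpClient();
    Request request = new Request.Builder().url("https://example.invalid/watch").build();

    // Stand-in for the Call returned by CallGenerator: built with a 5 minute call timeout.
    Call generated = baseClient.newBuilder()
        .callTimeout(5, TimeUnit.MINUTES)
        .build()
        .newCall(request);

    // Stand-in for apiClient.getHttpClient().newCall(call.request()): the rebuilt Call
    // only sees baseClient's callTimeout (0), so the 5 minute timeout is dropped.
    Call rebuilt = baseClient.newCall(generated.request());

    System.out.println("generated call timeout (ns): " + generated.timeout().timeoutNanos()); // 5 minutes
    System.out.println("rebuilt call timeout (ns):   " + rebuilt.timeout().timeoutNanos());   // 0 (unlimited)
  }
}
```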
Client Version 13.0.0 and above
Kubernetes Version n/a
Java Version n/a
To Reproduce
Create a new SharedInformerFactory using all default settings.
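A minimal reproduction sketch, based on the node-informer example from the client README and all default settings; the exact listNodeCall parameter list varies between client versions, so treat it as illustrative:

```java
import io.kubernetes.client.informer.SharedIndexInformer;
import io.kubernetes.client.informer.SharedInformerFactory;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Node;
import io.kubernetes.client.openapi.models.V1NodeList;
import io.kubernetes.client.util.Config;

public class ReproduceSketch {
  public static void main(String[] args) throws Exception {
    // All defaults: the ApiClient's OkHttpClient has callTimeout = 0.
    ApiClient apiClient = Config.defaultClient();
    CoreV1Api coreV1Api = new CoreV1Api(apiClient);
    SharedInformerFactory factory = new SharedInformerFactory(apiClient);

    SharedIndexInformer<V1Node> nodeInformer =
        factory.sharedIndexInformerFor(
            params ->
                coreV1Api.listNodeCall(
                    null, null, null, null, null, null,
                    params.resourceVersion, null, params.timeoutSeconds, params.watch, null),
            V1Node.class,
            V1NodeList.class);

    factory.startAllRegisteredInformers();
    // The watch issued by the informer never hits a client-side timeout,
    // even though ReflectorRunnable picked a 5-10 minute timeout.
  }
}
```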
Expected behavior
SharedInformerFactory's listerWatcherFor() should respect the call timeout from CallGenerator.
KubeConfig n/a
Server (please complete the following information): n/a
Additional context n/a
To work around this problem, you can either: 1) implement your own ListerWatcher to replace the default one in SharedInformerFactory; or 2) explicitly set a non-zero call timeout (for example, 5-10 minutes) on your OkHttpClient via callTimeoutMillis.
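For workaround 2), a minimal sketch assuming the standard Config.defaultClient() setup; the 10-minute call timeout is just an example value:

```java
import java.util.concurrent.TimeUnit;
import io.kubernetes.client.informer.SharedInformerFactory;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.util.Config;
import okhttp3.OkHttpClient;

public class WorkaroundSketch {
  public static void main(String[] args) throws Exception {
    ApiClient apiClient = Config.defaultClient();

    // Rebuild the underlying OkHttpClient with a non-zero call timeout so watch calls
    // are bounded on the client side; keep the read timeout at 0 (unlimited), which is
    // what the informer code expects for long-running watches.
    OkHttpClient httpClient = apiClient.getHttpClient().newBuilder()
        .callTimeout(10, TimeUnit.MINUTES)
        .readTimeout(0, TimeUnit.SECONDS)
        .build();
    apiClient.setHttpClient(httpClient);

    SharedInformerFactory factory = new SharedInformerFactory(apiClient);
    // ... register informers and call factory.startAllRegisteredInformers() as usual.
  }
}
```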
@brendandburns @yue9944882 Could you help to triage? Thanks!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.