Fail watch gracefully on control plane upgrade
What happened (please include outputs or screenshots):
When I run a watch on k8s Jobs (this probably applies to other object types as well) and the k8s control plane is upgraded in the meantime, the watcher silently stops producing events. No exception is raised.
What you expected to happen:
An exception to be raised, or events to keep arriving.
How to reproduce it (as minimally and precisely as possible):
- start the watcher:

  ```python
  from kubernetes import client, config, watch

  config.load_incluster_config()
  batch_v1_api = client.BatchV1Api()
  watcher = watch.Watch()
  for event in watcher.stream(
      func=batch_v1_api.list_namespaced_job,
      namespace="mynamespace",
      timeout_seconds=0,
  ):
      print(event)
  ```
- upgrade the k8s control plane
- create some new Jobs
- observe that new events do not arrive and no exception is raised (a restart-based workaround is sketched after this list)
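Until the client fails loudly here, one possible mitigation on the caller side is to avoid `timeout_seconds=0` and restart the watch in a loop. The sketch below is a minimal illustration of that idea, not part of the kubernetes client API: the function name `resilient_job_watch`, the 60-second timeout, and the 410-handling policy are assumptions chosen for the example, and the error handling is simplified (depending on the client version an expired watch may be reported as an ERROR event rather than an `ApiException`).

```python
from kubernetes import client, config, watch
from kubernetes.client.exceptions import ApiException


def resilient_job_watch(namespace: str) -> None:
    """Illustrative loop that restarts the Job watch instead of relying on one stream forever."""
    config.load_incluster_config()
    batch_v1_api = client.BatchV1Api()
    resource_version = None

    while True:
        watcher = watch.Watch()
        try:
            # A finite timeout_seconds makes the API server close the stream
            # periodically, so the loop regains control and reconnects.
            for event in watcher.stream(
                func=batch_v1_api.list_namespaced_job,
                namespace=namespace,
                resource_version=resource_version,
                timeout_seconds=60,
            ):
                # Remember the last seen resourceVersion so the next watch
                # resumes roughly where this one left off.
                resource_version = event["object"].metadata.resource_version
                print(event)
        except ApiException as exc:
            if exc.status == 410:
                # "Gone": the stored resourceVersion is too old; re-list from scratch.
                resource_version = None
            else:
                raise
        finally:
            watcher.stop()
```

With a finite server-side timeout the stream is re-established on the next iteration even if the connection went stale during a control plane upgrade, instead of hanging silently; it does not, by itself, explain why no exception is raised in the reported case.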
Anything else we need to know?:
Reproduced on GKE stable channel with the last two control plane upgrades.
Environment:
- Kubernetes version (`kubectl version`): 1.33.5-gke.1080000
- OS (e.g., MacOS 10.13.6): linux
- Python version (`python --version`): 3.12.12
- Python client version (`pip list | grep kubernetes`): 33.1.0
/help
@yliaog: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.