How to reduce the retry count for the Python client?
What happened (please include outputs or screenshots):
I have a kubeconfig file for a dead/deleted/inaccessible cluster. I am trying to access the cluster using kubernetes-client-python. By default the client retries 3 times:
WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x00000000096E3860>: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)': /api/v1/pods
WARNING Retrying (Retry(total=1,....... /api/v1/pods
WARNING Retrying (Retry(total=0,....... /api/v1/pods
After 3 retries it raises an exception. Is there any way to reduce the retry count?
Example Code
from kubernetes import client, config
config.load_kube_config(config_file='location-for-kube-config')
v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces()
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
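One workaround I am considering, assuming the client's internal attribute layout (api_client.rest_client.pool_manager) stays as it is today, is to override the retry policy on the underlying urllib3 PoolManager before the first request. This reaches into private attributes, so it is a sketch rather than a supported API:

import urllib3
from kubernetes import client, config

config.load_kube_config(config_file='location-for-kube-config')
v1 = client.CoreV1Api()

# connection_pool_kw is the kwargs dict urllib3 passes to every new
# connection pool; overriding 'retries' here lowers the retry count.
# NOTE: rest_client and pool_manager are internal attributes and may
# change between client releases.
v1.api_client.rest_client.pool_manager.connection_pool_kw['retries'] = \
    urllib3.util.Retry(total=1)

ret = v1.list_pod_for_all_namespaces()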
What you expected to happen:
I need to be able to configure the retry count, whether the cluster is accessible or not.
How to reproduce it (as minimally and precisely as possible):
Delete the Kubernetes cluster, then try to access it using kubernetes-client-python. By default the client retries 3 times.
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version): Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}; Unable to connect to the server: dial tcp 149.129.128.208:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
- OS: Windows 10
- Python version (python --version): Python 2.7.14
- Python client version (pip list | grep kubernetes): kubernetes 10.0.1
This was fixed in the upstream code generator: https://github.com/kubernetes-client/python/pull/780#issuecomment-474730247 (ref https://github.com/kubernetes-client/python/issues/652).
The latest version of this client was generated with openapi-generator 3.3.4, which doesn't include the fix yet. It will be included when we do a release with a newer generator version.
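For reference, in clients generated with the fix, Configuration gains a retries field that the generated REST client forwards to urllib3. A sketch of what usage should look like once a release includes it (the attribute does not exist in 10.0.1):

from kubernetes import client, config

configuration = client.Configuration()
config.load_kube_config(config_file='location-for-kube-config',
                        client_configuration=configuration)

# Forwarded to urllib3's Retry machinery by the generated REST client;
# only present in clients built from a generator that includes the fix.
configuration.retries = 1

v1 = client.CoreV1Api(client.ApiClient(configuration))
ret = v1.list_pod_for_all_namespaces()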
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Is there any update?
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Is this going to get fixed in the next release?
I'm currently using logging.getLogger(requests.packages.urllib3.__package__).setLevel(logging.ERROR) as a workaround, but it would be better if I didn't have to.
/remove-lifecycle rotten
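For anyone copying that workaround: it only silences urllib3's "Retrying..." warnings, it does not change how many retries happen. A self-contained version (using the requests-vendored urllib3 alias from the comment above; importing urllib3 directly and using logging.getLogger('urllib3') should behave the same):

import logging
import requests

# Raise the urllib3 logger's threshold so per-attempt retry warnings
# are dropped; the client still performs the retries.
logging.getLogger(requests.packages.urllib3.__package__).setLevel(logging.ERROR)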
The upstream fix was included in openapi-generator 4.0.0 https://github.com/OpenAPITools/openapi-generator/pull/2460
Currently this repo is still using openapi-generator 3.3.4
We could either evaluate and use the newer version, or cherrypick the fix to this repo.
cc @palnabarun
/help
@roycaihw: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
In response to this:
The upstream fix was included in openapi-generator 4.0.0 https://github.com/OpenAPITools/openapi-generator/pull/2460
Currently this repo is still using openapi-generator 3.3.4
We could either evaluate and use the newer version, or cherrypick the fix to this repo.
cc @palnabarun
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@roycaihw I evaluated the latest version of the openapi-generator. It looks promising; so far I've run into some problems with a validator (https://github.com/kubernetes-client/gen/issues/145). There are a lot of new features, fixes, and some breaking changes, so we should upgrade the generator in the next release.
Thanks @tomplus. I think after we release 11.0.0 stable version, we can start evaluating the latest version of the openapi-generator in 12.0.0a1.
cc @scottilee
I am also interested in this feature.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Now that release 11.0.0 is out, has there been any work done on upgrading the version of openapi-generator used?
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Release 12.0.0 is out now. Has this been looked into yet? I'm still interested in this getting fixed.
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I'm still interested in seeing this resolved.
/remove-lifecycle stale
Like most API clients, I need to control timeouts and retries.
/remove-lifecycle stale
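Timeouts, at least, are already controllable per call: every generated API method accepts a _request_timeout keyword (a single number, or a (connect, read) tuple in seconds). It bounds each attempt but does not change the retry count. For example:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Fail fast: 3s to connect, 10s to read. Each retry attempt is still
# made, but every one is bounded by these timeouts.
ret = v1.list_pod_for_all_namespaces(_request_timeout=(3, 10))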
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I'd still like to see a fix for this. /remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Please don't close yet, unless it's already fixed?
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Still needed. /remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I'd still like this. /remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any update on this? /reopen