List_stateful_set_for_all_namespaces
What happened (please include outputs or screenshots):
Traceback (most recent call last):
  File "/Users/frenkdefrog/DevOps/poc/getresources.py", line 99, in ...
    ...
    raise ValueError("Invalid value for `available_replicas`, must not be None")  # noqa: E501
ValueError: Invalid value for `available_replicas`, must not be None
What you expected to happen: I wanted to gather all the statefulsets.
How to reproduce it (as minimally and precisely as possible):

from kubernetes import client, config
config.load_kube_config()
apiClient = client.AppsV1Api()  # StatefulSets are served by the apps/v1 API group, not CoreV1Api
result = apiClient.list_stateful_set_for_all_namespaces()
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version): 1.21
- OS (e.g., MacOS 10.13.6): Monterey
- Python version (python --version): 3.9.10
- Python client version (pip list | grep kubernetes): 23.0.0-snapshot
Hi @frenkdefrog, I don't think this is a bug. The AvailableReplicas feature has only been supported in native Kubernetes since version 1.22; you can find it here.
Since your Kubernetes version is 1.21, I suggest using kubernetes python-client version 21.0.0 to avoid this issue.
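If it helps with matching the two, here is a quick way to print both the installed client version and the server version from Python before deciding which client release to pin (a minimal sketch, assuming a working kubeconfig):

```python
import kubernetes
from kubernetes import client, config

config.load_kube_config()

# Print the installed python client version and the version of the API server
# it talks to, so the two can be compared before choosing a client release.
print("python client:", kubernetes.__version__)
print("api server:", client.VersionApi().get_code().git_version)
```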
Thanks @showjason. @frenkdefrog, please change your client version as suggested and see if the issue is fixed. Thanks!
/assign @showjason
- Kubernetes version (kubectl version): 1.22
- OS (e.g., MacOS 10.13.6): Monterey
- Python version (python --version): 3.9.10
- Python client version (pip list | grep kubernetes): 22.0.4

I am also facing the same issue; list_namespaced_stateful_set gives the same error.
Hi @atulGupta2922, please follow this comment to debug your issue and check whether Kubernetes actually responded with available_replicas.
BTW, I cannot find a kubernetes-client version 22.0.4; can you check that again?
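One way to check what the API server actually returned, without going through the client's model validation, is to ask for the raw response (a rough sketch, assuming a working kubeconfig; _preload_content=False makes the generated call return the raw HTTP response instead of model objects):

```python
import json
from kubernetes import client, config

config.load_kube_config()
api = client.AppsV1Api()

# Fetch the raw response so the generated models (and their
# "must not be None" checks) are bypassed entirely.
raw = api.list_stateful_set_for_all_namespaces(_preload_content=False)
for sts in json.loads(raw.data)["items"]:
    status = sts.get("status", {})
    # On older servers (or scaled-down sets) availableReplicas may be absent.
    print(sts["metadata"]["namespace"], sts["metadata"]["name"],
          status.get("availableReplicas"))
```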

I had this issue, and downgrading to 21.7.0 appears to have fixed it. FWIW, I think this library should maintain backward compatibility and not raise an error when it encounters this situation against an older version of Kubernetes.
Any update on this?
I would also like an update on this. We were using 22.6.0 and started seeing this issue after upgrading to 23.3.0. Since we cannot control the Kubernetes version deployed at our customer sites, it is important that this client library remain backward compatible. Until this is resolved, it blocks us from upgrading to 23.3.0.
status.availableReplicas is feature-gated, as documented at https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetStatus, so it is not guaranteed to be present on StatefulSet status objects.
It also appears to be the only field the client insists must not be None on a StatefulSet status object, which is odd.
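For anyone who cannot downgrade: the generated check only runs when client-side validation is enabled, so turning it off may work as a stopgap. This is an unverified sketch, not an official fix; whether Configuration.get_default_copy()/set_default propagate to the deserialized models can vary by client release:

```python
from kubernetes import client, config

config.load_kube_config()

# Turn off the generated client-side validation so a missing
# status.availableReplicas comes back as None instead of raising ValueError.
cfg = client.Configuration.get_default_copy()
cfg.client_side_validation = False
client.Configuration.set_default(cfg)

api = client.AppsV1Api()
for sts in api.list_stateful_set_for_all_namespaces().items:
    print(sts.metadata.namespace, sts.metadata.name, sts.status.available_replicas)
```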
I have a 1.22 cluster, and if there are ready pods in the StatefulSet, available_replicas has a value and everything works.
But if you scale the set down to 0, the field is removed from the API response and triggers this None exception. It is distasteful that the client blows up this way instead of leaving the value as None and letting developers interpret it.
Since we're on the topic of distasteful: to get reliable behavior, I'm going to have to subprocess out to kubectl in order to work with StatefulSets.
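For what it's worth, the subprocess fallback can stay small (a rough sketch, assuming kubectl is on PATH and the desired context is already selected):

```python
import json
import subprocess

# Ask kubectl for every StatefulSet and parse the JSON directly, so a missing
# status.availableReplicas is just an absent key rather than an exception.
out = subprocess.run(
    ["kubectl", "get", "statefulsets", "--all-namespaces", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

for sts in json.loads(out)["items"]:
    status = sts.get("status", {})
    print(sts["metadata"]["namespace"], sts["metadata"]["name"],
          status.get("availableReplicas"))
```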
Downgrading the kubernetes client from 23.3.0 to 21.7.0 resolved this issue for me.
I was getting this error with 23.3.0 even when all pods in the StatefulSet were ready. The issue was introduced with 23.3.0; all previous k8s-client releases worked fine. I had to back out to 22.6.0, and that works.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.