client.read_namespaced_pod does not work when the pod is created by a CRD resource
What happened (please include outputs or screenshots): When I run the following script:

```python
api_client = client.ApiClient(config.load_kube_config(config_path), pool_threads=1)
k8s_client = client.CoreV1Api(api_client)
k8s_resp = k8s_client.read_namespaced_pod(name="ists-1", namespace="qa-test")
print(str(k8s_resp))
```
I got this error:

```
  File "run.py", line 19, in <module>
    main()
  File "run.py", line 15, in main
    k8s_resp = k8s_client.read_namespaced_pod(name="ists-0", namespace="qa-test")
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 23483, in read_namespaced_pod
    return self.read_namespaced_pod_with_http_info(name, namespace, **kwargs)  # noqa: E501
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 23584, in read_namespaced_pod_with_http_info
    collection_formats=collection_formats)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 353, in call_api
    _preload_content, _request_timeout, _host)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 192, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 264, in deserialize
    return self.__deserialize(data, response_type)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 281, in __deserialize
    for sub_data in data]
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 281, in <listcomp>
    for sub_data in data]
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 641, in __deserialize_model
    instance = klass(**kwargs)
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/models/v1_pod_readiness_gate.py", line 52, in __init__
    self.condition_type = condition_type
  File "/home/users/venv/lib/python3.6/site-packages/kubernetes/client/models/v1_pod_readiness_gate.py", line 80, in condition_type
    .format(condition_type, allowed_values)
ValueError: Invalid value for `condition_type` (InPlaceUpdateReady), must be one of ['ContainersReady', 'Initialized', 'PodScheduled', 'Ready']
```
What you expected to happen: I expected the pod to be returned and printed.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
My pod is created by a CRD resource, so the InPlaceUpdateReady condition type is a custom property that the generated client does not know about.
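The failure happens entirely on the client side: the generated model whitelists the four built-in condition types and rejects anything else in its property setter. A minimal, self-contained simulation of that pattern (a sketch of the check in `v1_pod_readiness_gate.py`, not the actual client code):

```python
# Simulation of the generated-model enum validation that raises here.
# The real check lives in kubernetes/client/models/v1_pod_readiness_gate.py.
class FakePodReadinessGate:
    ALLOWED = ["ContainersReady", "Initialized", "PodScheduled", "Ready"]

    def __init__(self, condition_type):
        self.condition_type = condition_type  # routed through the setter below

    @property
    def condition_type(self):
        return self._condition_type

    @condition_type.setter
    def condition_type(self, value):
        # The generated setter rejects any condition type outside the enum,
        # so a controller-added type like InPlaceUpdateReady aborts the
        # deserialization of the whole pod object.
        if value not in self.ALLOWED:
            raise ValueError(
                "Invalid value for `condition_type` ({}), must be one of {}"
                .format(value, self.ALLOWED))
        self._condition_type = value


FakePodReadinessGate("Ready")                   # accepted
try:
    FakePodReadinessGate("InPlaceUpdateReady")  # rejected by the setter
except ValueError as exc:
    print(exc)
```

This is why the error surfaces during `read_namespaced_pod` even though the API server returned the pod successfully.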
Environment:
- Kubernetes version (`kubectl version`):
- OS (e.g., MacOS 10.13.6):
- Python version (`python --version`): Python 3.6.4
- Python client version (`pip list | grep kubernetes`): kubernetes 23.3.0
When I run the following command, the status shows the custom condition type. What should I do to make this work?

```
$ kubectl -n qa-test get pod ists-1 -oyaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: InPlaceUpdateReady
  - lastProbeTime: null
    lastTransitionTime: "2022-02-16T11:48:32Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-02-16T11:48:44Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-02-16T11:48:44Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-02-16T11:48:31Z"
    status: "True"
    type: PodScheduled
```
This issue is similar to #1735.
A workaround is to add each customized pod condition type, such as InPlaceUpdateReady, to `allowed_values` in the generated model.
/assign
This is being fixed upstream. We will cut a new 1.23 client to backport the fix once the PR https://github.com/kubernetes/kubernetes/pull/108740 is merged.
@roycaihw https://github.com/kubernetes/kubernetes/pull/108740 was merged
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.