Listing the CSINode resource doesn't work
What happened (please include outputs or screenshots):
I am trying to get a list of CSINode objects in my Kubernetes cluster using the client.StorageV1Api().list_csi_node() function, but I get an error. The client.StorageV1Api().read_csi_node() function works and I can read a specific CSINode object, but I want to list all of them. I am running my code from inside a container in the cluster that has the right permissions, and I can list csidrivers resource objects with this Python client without any problem.
This is the error:
>>> storage_api.list_csi_node()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api/storage_v1_api.py", line 2123, in list_csi_node
return self.list_csi_node_with_http_info(**kwargs) # noqa: E501
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api/storage_v1_api.py", line 2230, in list_csi_node_with_http_info
return self.api_client.call_api(
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 192, in __call_api
return_data = self.deserialize(response_data, response_type)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 264, in deserialize
return self.__deserialize(data, response_type)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
kwargs[attr] = self.__deserialize(value, attr_type)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
return [self.__deserialize(sub_data, sub_kls)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
return [self.__deserialize(sub_data, sub_kls)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
kwargs[attr] = self.__deserialize(value, attr_type)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 641, in __deserialize_model
instance = klass(**kwargs)
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/models/v1_csi_node_spec.py", line 52, in __init__
self.drivers = drivers
File "/usr/local/lib/python3.10/site-packages/kubernetes/client/models/v1_csi_node_spec.py", line 75, in drivers
raise ValueError("Invalid value for `drivers`, must not be `None`") # noqa: E501
ValueError: Invalid value for `drivers`, must not be `None`
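The traceback shows the generated V1CSINodeSpec model rejecting drivers=None during deserialization, not the API call itself failing. As a workaround sketch (not the library's documented approach), the raw response can be fetched with _preload_content=False and parsed as plain JSON, which sidesteps the model validation; the helper names below are my own:

```python
import json


def list_csi_nodes_raw(storage_api):
    """Fetch the CSINodeList without client-side deserialization, so the
    response survives even when a node reports `drivers: null` (which the
    generated V1CSINodeSpec model rejects)."""
    resp = storage_api.list_csi_node(_preload_content=False)
    return json.loads(resp.data)


def driver_summary(payload):
    """Map each CSINode name to its drivers list (which may be None)."""
    return {item["metadata"]["name"]: item.get("spec", {}).get("drivers")
            for item in payload.get("items", [])}


if __name__ == "__main__":
    # In-cluster configuration, matching the setup described in the report.
    from kubernetes import client, config
    config.load_incluster_config()
    payload = list_csi_nodes_raw(client.StorageV1Api())
    print(driver_summary(payload))
```

This at least makes the list retrievable and shows which nodes come back with drivers set to null, which is what the model validation is choking on.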
What you expected to happen: to get a list of all the CSINode objects in my Kubernetes cluster.
How to reproduce it (as minimally and precisely as possible): create a pod in your Kubernetes cluster with permissions to list the csinodes resource and try to use this module to list CSI nodes.
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version):
  Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g., MacOS 10.13.6): RHEL 7.8
- Python version (python --version): Python 3.10.4
- Python client version (pip list | grep kubernetes): kubernetes 23.3.0
/assign
Could you enable debug logging and check whether the drivers: None came from the API server response? It may be a similar issue to https://github.com/kubernetes-client/gen/issues/52
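One way to enable that debug logging (which dumps request and response bodies, so you can see whether the server actually sent drivers: null) is to flip the debug flag on a copy of the default client Configuration. A sketch, assuming the in-cluster setup described in the report; the helper name is my own:

```python
def debug_storage_api():
    """Build a StorageV1Api whose HTTP traffic is logged, so the raw
    CSINodeList body can be inspected for `drivers: null`."""
    from kubernetes import client, config
    config.load_incluster_config()
    cfg = client.Configuration.get_default_copy()
    cfg.debug = True  # log full request/response bodies to stderr
    return client.StorageV1Api(client.ApiClient(cfg))


if __name__ == "__main__":
    # The logged response body shows what the server returned for `drivers`
    # before client-side deserialization raises the ValueError.
    debug_storage_api().list_csi_node()
```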
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.