
client.CoreV1Api().list_node() does not work

Open ApproximateIdentity opened this issue 2 years ago • 11 comments

What happened (please include outputs or screenshots):

When I run the following script:

from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
v1.list_node()

I expect it to list my nodes (running kubectl get node works fine), but instead it throws the following error:

Traceback (most recent call last):
  File "/home/user/cluster-scaler/script.py", line 4, in <module>
    v1.list_node()
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 16844, in list_node
    return self.list_node_with_http_info(**kwargs)  # noqa: E501
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 16951, in list_node_with_http_info
    return self.api_client.call_api(
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 192, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 264, in deserialize
    return self.__deserialize(data, response_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 641, in __deserialize_model
    instance = klass(**kwargs)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/models/v1_node_condition.py", line 76, in __init__
    self.type = type
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/models/v1_node_condition.py", line 219, in type
    raise ValueError(
ValueError: Invalid value for `type` (GcfsSnapshotterUnhealthy), must be one of ['DiskPressure', 'MemoryPressure', 'NetworkUnavailable', 'PIDPressure', 'Ready']

What you expected to happen:

I expect it to list the nodes.

How to reproduce it (as minimally and precisely as possible):

Script found above

Anything else we need to know?:
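
The ValueError above is raised on the client side: the generated V1NodeCondition model validates the condition type while deserializing the response, and GKE reports provider-specific types (here GcfsSnapshotterUnhealthy) that are not in the model's allowed list. As a rough way to read the node list without that client-side validation, here is a minimal sketch assuming the standard _preload_content=False option of the generated API methods, which returns the raw HTTP response instead of deserialized model objects:

from kubernetes import client, config
import json

config.load_kube_config()
v1 = client.CoreV1Api()

# Skip model deserialization; the raw response body is the plain JSON
# returned by the API server, so no enum validation is applied.
raw = v1.list_node(_preload_content=False)
nodes = json.loads(raw.data)
for item in nodes["items"]:
    print(item["metadata"]["name"])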

Environment:

  • Kubernetes version (kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3-gke.1500", GitCommit:"dbfed6fd139873c88230073d9a1d7b8e7ac4c98e", GitTreeState:"clean", BuildDate:"2021-11-17T09:30:21Z", GoVersion:"go1.16.9b7", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g., MacOS 10.13.6):
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye
  • Python version (python --version)
$ python3 -V
Python 3.9.2 
  • Python client version (pip list | grep kubernetes)
$ pip3 freeze | grep kubernetes
kubernetes==23.3.0

ApproximateIdentity avatar Mar 04 '22 02:03 ApproximateIdentity

Maybe this is related to this bug:

https://github.com/kubernetes-client/python/issues/1733

I am using GKE, by the way, so it seems this problem affects GKE in addition to the other providers mentioned in that bug report.

ApproximateIdentity avatar Mar 04 '22 02:03 ApproximateIdentity

I'm also getting a similar error when calling list_node():

Problem encountered: Invalid value for `type` (CorruptDockerOverlay2), must be one of ['DiskPressure', 'MemoryPressure', 'NetworkUnavailable', 'PIDPressure', 'Ready']

I don't seem to get this error if I downgrade to version 22.6.0.
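
If pinning the client is an acceptable stopgap until a fixed release is out, a minimal sketch of the pin (version taken from the comment above; adjust to however you install the package):

# requirements.txt: pin to the last release without the node-condition enum validation
kubernetes==22.6.0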

iameskild avatar Mar 07 '22 10:03 iameskild

Hey, I added some condition types that allowed me to list nodes under AKS @ApproximateIdentity @iameskild:

https://github.com/kubernetes-client/python/pull/1739
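
Until a release containing those extra condition types ships, a rough local workaround is to loosen the generated validation at startup. This is only a sketch, not the fix from the PR; it assumes the generated model stores the value in self._type, as the kubernetes==23.3.0 code in the traceback above does:

from kubernetes.client.models.v1_node_condition import V1NodeCondition

def _permissive_type(self, type):
    # Accept provider-specific condition types (GcfsSnapshotterUnhealthy,
    # CorruptDockerOverlay2, ...) instead of raising ValueError.
    self._type = type

# Keep the generated getter; replace only the validating setter.
V1NodeCondition.type = V1NodeCondition.type.setter(_permissive_type)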

jesskranz avatar Mar 08 '22 11:03 jesskranz

Thanks for bringing this to our attention. It's a regression and we are fixing it: https://github.com/kubernetes-client/python/pull/1739#discussion_r823255444

roycaihw avatar Mar 10 '22 01:03 roycaihw

This is being fixed in upstream. We will cut a new 1.23 client to backport the fix once the PR https://github.com/kubernetes/kubernetes/pull/108740 is merged

roycaihw avatar Mar 28 '22 16:03 roycaihw

This is being fixed in upstream. We will cut a new 1.23 client to backport the fix once the PR kubernetes/kubernetes#108740 is merged

Seems it is merged. Any info on when the fix will be available?

Usuychik avatar Apr 06 '22 12:04 Usuychik

Yes, I plan to cut a new release this week.

roycaihw avatar Apr 06 '22 16:04 roycaihw

Has this fix been released yet? Do you know about other problems between this client and GKE? I would like to create my own kube-scheduler on GKE.

goloneczka avatar Apr 16 '22 09:04 goloneczka

We are still waiting for the upstream to cut a new patch release: https://github.com/kubernetes-client/python/issues/1773

roycaihw avatar Apr 18 '22 16:04 roycaihw

I'm also getting a similar error when calling list_node():

Problem encountered: Invalid value for `type` (CorruptDockerOverlay2), must be one of ['DiskPressure', 'MemoryPressure', 'NetworkUnavailable', 'PIDPressure', 'Ready']

I don't seem to get this error if I downgrade to version 22.6.0.

@iameskild what exactly did you downgrade, and how? Do you mean the client version? EDIT (after crying): your workaround works; I just added 'RUN pip install kubernetes==22.6.0'.

goloneczka avatar Apr 23 '22 11:04 goloneczka

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 22 '22 12:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 21 '22 13:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 20 '22 13:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 20 '22 13:09 k8s-ci-robot