Unable to deserialize models with attributes of type RuntimeRawExtension
Minimal example for ControllerRevision (requires an active Kubernetes cluster with a working kubeconfig):
from kubernetes import client, config

config.load_kube_config()
apps_v1beta1_api = client.AppsV1beta1Api()
# The list call fails while deserializing each ControllerRevision's
# freeform 'data' field (a RuntimeRawExtension)
controller_revisions = apps_v1beta1_api.list_controller_revision_for_all_namespaces()
The issue seems to be here: https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api_client.py#L617
if (data is not None
        and klass.attribute_map[attr] in data  # <-- here
        and isinstance(data, (list, dict))):
    value = data[klass.attribute_map[attr]]
    kwargs[attr] = self.__deserialize(value, attr_type)
The marked line looks up the mapped key 'Raw' in data, but for RuntimeRawExtension, data is a freeform JSON object with no 'Raw' key. Because the conditional never passes, nothing is added to kwargs, and RuntimeRawExtension is initialized without its required raw value.
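To make the failure concrete, here is a standalone toy reproduction of that conditional (the attribute_map entry mirrors the RuntimeRawExtension model; the sample data value is made up):

# Toy reproduction of the failing check in __deserialize_model.
# RuntimeRawExtension maps the python attribute 'raw' to the JSON key 'Raw'.
attribute_map = {'raw': 'Raw'}

# A ControllerRevision's 'data' field is a freeform object, for example:
data = {'spec': {'template': {'metadata': {'labels': {'app': 'demo'}}}}}

kwargs = {}
for attr, json_key in attribute_map.items():
    # Same shape as the conditional in api_client.py: a freeform object has
    # no 'Raw' key, so the branch is never taken and kwargs stays empty.
    if data is not None and json_key in data and isinstance(data, (list, dict)):
        kwargs[attr] = data[json_key]

print(kwargs)  # {} -- RuntimeRawExtension(**kwargs) then fails: 'raw' is required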
Here's a patch to openshift-restclient-python that at least partially works around this issue: https://github.com/openshift/openshift-restclient-python/pull/159/files
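Until a proper fix lands, one self-contained workaround is to bypass model deserialization entirely with the generated client's standard _preload_content parameter and parse the JSON yourself. A minimal sketch (the field access at the end assumes the ControllerRevision schema):

import json

from kubernetes import client, config

config.load_kube_config()
apps_v1beta1_api = client.AppsV1beta1Api()

# _preload_content=False returns the raw urllib3 response instead of
# deserializing into (broken) model objects.
resp = apps_v1beta1_api.list_controller_revision_for_all_namespaces(
    _preload_content=False)
revisions = json.loads(resp.data)
for item in revisions['items']:
    print(item['metadata']['name'], item['revision'])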
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
This is still broken. Can we port the fix from openshift to this library?
@devkid: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
This is still broken. Can we port the fix from openshift to this library?
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@fabianvf: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen
Has it been fixed in the latest release? If not, I would like to cherry-pick the patch and fix this. :)
I hit the same bug while trying to get the pods owned by a DaemonSet without looking at every pod in the namespace. Any chance of fixing it?
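For that particular use case, a sketch of a workaround that avoids ControllerRevisions (and this bug) entirely: reuse the DaemonSet's own label selector so the filtering happens server-side, then double-check ownerReferences. The name and namespace below are placeholders:

from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()
core_v1 = client.CoreV1Api()

# Placeholder DaemonSet; substitute your own name/namespace.
ds = apps_v1.read_namespaced_daemon_set('my-daemonset', 'kube-system')

# Build a label selector string from the DaemonSet's spec so the API
# server filters the pod list for us.
selector = ','.join('{}={}'.format(k, v)
                    for k, v in ds.spec.selector.match_labels.items())
pods = core_v1.list_namespaced_pod('kube-system', label_selector=selector)

for pod in pods.items:
    # Keep only pods whose ownerReferences point at this exact DaemonSet.
    refs = pod.metadata.owner_references or []
    if any(ref.uid == ds.metadata.uid for ref in refs):
        print(pod.metadata.name)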