
CRD status is not deployed

Open leonp-c opened this issue 1 year ago • 4 comments

**What happened:** Registering a CRD with:

    subresources:
      scale:
        labelSelectorPath: .status.selector
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
      status: {}

does not end up registered in Kubernetes with the status subresource. After checking from the command line with `kubectl get crd some.custom.crd.ai -o yaml`, the returned YAML is:

    subresources:
      scale:
        labelSelectorPath: .status.selector
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas

The `status: {}` entry is missing.

**What you expected to happen:** The status subresource should be registered, so that the following call from the kubernetes package works: `custom_objects_api.get_namespaced_custom_object(group=self.group, version=self.version, namespace=self.namespace, plural=self.plural, name=self.name)`
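For context, a minimal sketch of that call (the group/version/namespace/plural/name values below are placeholders, not my real ones):

```python
from kubernetes import client, config

config.load_kube_config()
custom_objects_api = client.CustomObjectsApi()

# Placeholder identifiers; in my code these come from self.group,
# self.version, self.namespace, self.plural and self.name.
obj = custom_objects_api.get_namespaced_custom_object(
    group="some.custom.crd.ai",
    version="v1",
    namespace="default",
    plural="somecustoms",
    name="example",
)
print(obj.get("status"))
```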

**How to reproduce it (as minimally and precisely as possible):** Deploy a CustomResourceDefinition whose spec.versions[*].subresources.status is set to {} (an empty dict), then inspect the deployed CRD with `kubectl get crd some.resource.name.ai -o yaml`.
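For reference, deploying such a CRD with the kubernetes client directly would look roughly like this (the group/kind/plural names and the permissive schema are placeholders; only the subresources block mirrors the one above):

```python
from kubernetes import client, config

config.load_kube_config()
apiext = client.ApiextensionsV1Api()

# Hypothetical CRD: names and schema are illustrative only.
crd_body = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "someresources.some.resource.name.ai"},
    "spec": {
        "group": "some.resource.name.ai",
        "scope": "Namespaced",
        "names": {
            "plural": "someresources",
            "singular": "someresource",
            "kind": "SomeResource",
        },
        "versions": [
            {
                "name": "v1",
                "served": True,
                "storage": True,
                "schema": {
                    "openAPIV3Schema": {
                        "type": "object",
                        "x-kubernetes-preserve-unknown-fields": True,
                    }
                },
                "subresources": {
                    "scale": {
                        "labelSelectorPath": ".status.selector",
                        "specReplicasPath": ".spec.replicas",
                        "statusReplicasPath": ".status.replicas",
                    },
                    "status": {},  # the empty dict that ends up missing
                },
            }
        ],
    },
}

apiext.create_custom_resource_definition(body=crd_body)
# Then inspect: kubectl get crd someresources.some.resource.name.ai -o yaml
```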

**Anything else we need to know?:** I also tried downgrading to kubernetes 28.1.0 (to stay compatible with hikaru 1.3.0); the result was the same.

**Environment:**

  • Kubernetes version (kubectl version):
    • Client Version: v1.27.2
    • Kustomize Version: v5.0.1
    • Server Version: v1.27.14
    • OS (e.g., MacOS 10.13.6):
  • Python version (python --version): 3.10.12
  • Python client version (pip list | grep kubernetes): 30.1.0
  • hikaru version: 1.3.0

leonp-c avatar Aug 21 '24 14:08 leonp-c

This seems to be a server-side issue. Have you verified whether kubectl has the same problem?

roycaihw avatar Aug 28 '24 20:08 roycaihw

Querying with kubectl returned all values as expected.

leonp-c avatar Sep 02 '24 07:09 leonp-c

Hi @leonp-c,

I tried to reproduce the issue you reported, and everything worked as expected on my end. Here’s what I did:

  1. Deployed a CRD with spec.versions.subresources.status defined as {} using the Kubernetes Python client.
  2. Queried the resource both using kubectl and the Python client, and I was able to see the status field correctly populated in both cases.

If everything looks correct on your side and the issue persists, feel free to share more details about your setup.
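The Python-side check was roughly the following (the CRD name is a placeholder for the one I created):

```python
from kubernetes import client, config

config.load_kube_config()
apiext = client.ApiextensionsV1Api()

# Placeholder CRD name; substitute the one you deployed.
crd = apiext.read_custom_resource_definition("someresources.some.resource.name.ai")
for version in crd.spec.versions:
    # With status: {} in the manifest, subresources.status came back
    # non-None here, matching what kubectl shows.
    print(version.name, version.subresources)
```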

Bhargav-manepalli avatar Sep 11 '24 08:09 Bhargav-manepalli

Hi @Bhargav-manepalli, it turns out the issue was related to the hikaru module, which I used to parse the YAML and create the resource: hikaru removes/ignores the empty dictionary field from the V1 object. A bug was opened on their repository (hikaru-43). Thank you for your effort.
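For anyone hitting the same thing before that is fixed, a possible workaround (just a sketch; crd.yaml is a placeholder path to the same manifest) is to apply the CRD with the plain kubernetes client instead of going through hikaru, making sure the empty status mapping is present:

```python
import yaml
from kubernetes import client, config

config.load_kube_config()

# Placeholder path to the CRD manifest that hikaru was parsing.
with open("crd.yaml") as f:
    crd_body = yaml.safe_load(f)

for version in crd_body["spec"]["versions"]:
    subresources = version.get("subresources")
    if subresources is not None:
        # Keep the empty mapping so the status subresource is enabled.
        subresources.setdefault("status", {})

client.ApiextensionsV1Api().create_custom_resource_definition(body=crd_body)
```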

leonp-c avatar Sep 12 '24 04:09 leonp-c

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 11 '24 12:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 10 '25 13:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Feb 09 '25 13:02 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Feb 09 '25 13:02 k8s-ci-robot