CRD status is not deployed
**What happened**: Registering a CRD with:

```yaml
subresources:
  scale:
    labelSelectorPath: .status.selector
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas
  status: {}
```

does not register the `status` subresource. After checking from the command line with `kubectl get crd some.custom.crd.ai -o yaml`, the resulting YAML is:

```yaml
subresources:
  scale:
    labelSelectorPath: .status.selector
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas
```

`status: {}` is missing.
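For reference, here is a minimal sketch of how such a CRD can be registered through the official Python client, with the status subresource enabled by passing an empty object. The group, plural, kind, and schema below are placeholders, not the reporter's actual resource:

```python
# Sketch only: registers a CRD with scale + status subresources.
# Group/plural/kind/schema are placeholders.
from kubernetes import client, config

config.load_kube_config()

crd = client.V1CustomResourceDefinition(
    metadata=client.V1ObjectMeta(name="somecustoms.custom.crd.ai"),
    spec=client.V1CustomResourceDefinitionSpec(
        group="custom.crd.ai",
        scope="Namespaced",
        names=client.V1CustomResourceDefinitionNames(
            plural="somecustoms", singular="somecustom", kind="SomeCustom"
        ),
        versions=[
            client.V1CustomResourceDefinitionVersion(
                name="v1",
                served=True,
                storage=True,
                schema=client.V1CustomResourceValidation(
                    open_api_v3_schema=client.V1JSONSchemaProps(
                        type="object",
                        x_kubernetes_preserve_unknown_fields=True,
                    )
                ),
                subresources=client.V1CustomResourceSubresources(
                    scale=client.V1CustomResourceSubresourceScale(
                        label_selector_path=".status.selector",
                        spec_replicas_path=".spec.replicas",
                        status_replicas_path=".status.replicas",
                    ),
                    # Enabling the status subresource is just an empty object
                    # ("status: {}" in YAML).
                    status={},
                ),
            )
        ],
    ),
)

client.ApiextensionsV1Api().create_custom_resource_definition(crd)
```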
**What you expected to happen**:

`status` should exist, so that using the Kubernetes Python client (`kubernetes` package):

```python
custom_objects_api.get_namespaced_custom_object(group=self.group, version=self.version, namespace=self.namespace, plural=self.plural, name=self.name)
```

would work.
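For context, here is the same call spelled out with hypothetical values standing in for `self.group`, `self.version`, and so on; reading through the dedicated `/status` endpoint only works when the CRD actually registers the status subresource:

```python
# Sketch only: group/version/namespace/plural/name are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

obj = api.get_namespaced_custom_object(
    group="custom.crd.ai", version="v1", namespace="default",
    plural="somecustoms", name="example",
)
print(obj.get("status"))

# Requires spec.versions[*].subresources.status to be registered on the CRD.
status = api.get_namespaced_custom_object_status(
    group="custom.crd.ai", version="v1", namespace="default",
    plural="somecustoms", name="example",
)
```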
**How to reproduce it (as minimally and precisely as possible)**:

1. Deploy a CustomResourceDefinition resource that has `spec.versions.subresources.status` set to `{}` (an empty dict).
2. Check the deployed CRD resource YAML (see the sketch after this list for the same check from Python):

```
kubectl get crd some.resource.name.ai -o yaml
```
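The check can also be done from the Python side by reading the deployed CRD back and inspecting the registered subresources; the CRD name below is a placeholder:

```python
# Sketch only: read the deployed CRD and print its registered subresources.
from kubernetes import client, config

config.load_kube_config()

crd = client.ApiextensionsV1Api().read_custom_resource_definition(
    "somecustoms.custom.crd.ai"  # placeholder CRD name
)
for version in crd.spec.versions:
    sub = version.subresources
    print(
        version.name,
        "scale:", getattr(sub, "scale", None),
        "status:", getattr(sub, "status", None),
    )
```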
**Anything else we need to know?**: Tried downgrading to kubernetes 28.1.0 (to comply with the hikaru version, 1.3.0); same result.
**Environment**:
- Kubernetes version (`kubectl version`):
  - Client Version: v1.27.2
  - Kustomize Version: v5.0.1
  - Server Version: v1.27.14
- OS (e.g., MacOS 10.13.6):
- Python version (`python --version`): 3.10.12
- Python client version (`pip list | grep kubernetes`): 30.1.0
- hikaru version: 1.3.0
This seems to be a server-side issue. Have you verified if kubectl has the same problem?
Using `kubectl` returned all values as expected.
Hi @leonp-c,
I tried to reproduce the issue you reported, and everything worked as expected on my end. Here’s what I did:
- Deployed a CRD with spec.versions.subresources.status defined as {} using the Kubernetes Python client.
- Queried the resource both using kubectl and the Python client, and I was able to see the status field correctly populated in both cases.
If everything looks correct on your end and the issue still persists, feel free to share more details about your setup.
Hi @Bhargav-manepalli, it seems the issue was related to the hikaru module, which I used to parse the YAML and create the resource. hikaru removes/ignores the empty dictionary field from the v1 object. A bug was opened on their GitHub (hikaru-43). Thank you for your effort.
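A quick way to confirm that diagnosis would be to round-trip the manifest through hikaru and check whether the empty `status: {}` mapping survives. This is only a rough sketch: the helper names (`load_full_yaml`, `get_clean_dict`) are assumed from hikaru's documented API, and `crd.yaml` is a placeholder path:

```python
# Hedged sketch: hikaru helper names are assumed, not verified against 1.3.0.
from hikaru import load_full_yaml, get_clean_dict  # assumed helpers

docs = load_full_yaml(path="crd.yaml")  # placeholder path to the CRD manifest
crd = docs[0]
roundtripped = get_clean_dict(crd)

subresources = roundtripped["spec"]["versions"][0].get("subresources", {})
print("status present after round-trip:", "status" in subresources)
```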
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.