kubectl wait: allow waiting on a missing field
What would you like to be added:
Add a flag signalling that a jsonpath expression should not error if the field, or an object in its chain, does not exist.
Why is this needed:
I want to wait for a specific field in the status of a resource inside a script with the following command:
kubectl wait --for=jsonpath=.status.ready=true
But the status field does not exist when the resource is first created. This can cause a race condition if the controller does not add the field fast enough: if the field does not exist yet, the wait command errors out.
It would be nice if the wait command could wait on fields that will only be created on the resource later.
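Until such a flag exists, a minimal workaround sketch for scripts (the resource kind mycrd and name example are hypothetical placeholders, and this assumes kubectl v1.23+ where --for=jsonpath is available) is to poll until the field appears and only then hand off to kubectl wait:

# Poll until .status.ready is present; kubectl get prints nothing while the field is
# missing (--allow-missing-template-keys defaults to true), so grep -q . keeps failing.
until kubectl get mycrd example -o jsonpath='{.status.ready}' 2>/dev/null | grep -q .; do
  sleep 1
done
# Once the field exists, waiting on its value no longer races with the controller.
kubectl wait mycrd example --for=jsonpath='{.status.ready}'=true --timeout=60s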
/assign
/triage needs-information
Before deciding if we want to accept this I'd like to ensure we can differentiate between 2 diametrically different situations:
- when the field is not set (i.e. it has omitempty so it won't show up, but it exists in the schema)
- when the field does not exist, i.e. it was misspelled.
If we can differentiate, then I can definitely support option no. 1, since that will bring the value asked for in the original comment. If we can't, I'd prefer we error out.
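One way to probe the difference from the client side, sketched below with the same hypothetical mycrd names and assuming the CRD publishes a structural OpenAPI schema: kubectl explain consults the schema, while kubectl get -o jsonpath consults the live object.

# Field present in the schema: explain succeeds even if no object has set it yet.
kubectl explain mycrd.status.ready
# Misspelled field, absent from the schema: explain reports that the field does not exist.
kubectl explain mycrd.status.raedy
# Field in the schema but not yet set on the object: jsonpath prints nothing rather than
# erroring, because --allow-missing-template-keys defaults to true for kubectl get.
kubectl get mycrd example -o jsonpath='{.status.ready}'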
In my case I want to wait on the Kyverno ClusterPolicy CRD (https://htmlpreview.github.io/?https://github.com/kyverno/kyverno/blob/main/docs/crd/v1/index.html#kyverno.io/v1.ClusterPolicy).
It defines status as optional.
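For reference, a sketch of the intended invocation against that CRD (the policy name require-labels is hypothetical, and it assumes the Kyverno controller eventually populates .status.ready):

kubectl wait clusterpolicy require-labels --for=jsonpath='{.status.ready}'=true --timeout=120s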
/triage accepted
/remove-label triage/needs-information
@soltysh: The label(s) /remove-label triage/needs-information cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor
In response to this:
/triage accepted
/remove-label triage/needs-information
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/close
I believe this is resolved by #109525. Please reopen if it is not actually resolved.
@mpuckett159: Closing this issue.
In response to this:
/close
I believe this is resolved by #109525. Please reopen if it is not actually resolved.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.