How to delete a custom object and wait until it is cleaned up with `propagation_policy`
What is the feature and why do you need it:
I'm trying to use the Kubernetes Python SDK to delete a CR with custom_api.delete_namespaced_custom_object. My CR has a finalizer, and a controller cleans up some dependent resources when a deletionTimestamp is set on the CR. Once the resources are cleaned up, the finalizer is removed and the CR itself is deleted. My question is: how can I make the custom_api.delete_namespaced_custom_object function return only after my CR has been cleaned up?
I've tried passing propagation_policy='Foreground' and 'Background' to the function, but it does not do what I want: it deletes the CR and returns immediately, without waiting for the cleanup to finish.
kwargs = {
    "propagation_policy": "Foreground",  # or "Background"
    "grace_period_seconds": 120,
}
self.custom_api.delete_namespaced_custom_object(
    "example.com",
    "v1alpha1",
    "default",
    "mycrds",
    "demo-cr",
    **kwargs
)
Is it possible to implement this behavior using propagation_policy or some other similar technique?
Describe the solution you'd like to see:
self.custom_api.delete_namespaced_custom_object(..., propagation_policy="Foreground")
A call like this would block until the CR is actually deleted.
Your usage of the Foreground policy is correct.
However, delete_namespaced_custom_object does not block until the object is deleted. You need to write your own code that waits for the object to stop existing. kubectl achieves this by issuing additional API calls to the server; you can run kubectl with -v=9 to see which requests it uses to wait for the deletion to finish.
Ref: https://kubernetes.io/docs/concepts/architecture/garbage-collection/#cascading-deletion
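A minimal sketch of such a wait loop, reusing the group/version/plural/name from the snippet above and a hypothetical helper name; it simply polls get_namespaced_custom_object until the API server returns 404, with illustrative timeout and interval values:

import time

from kubernetes.client.rest import ApiException

def wait_for_custom_object_deletion(custom_api, timeout_seconds=300, interval_seconds=5):
    # Poll the CR until the API server reports it as gone (HTTP 404),
    # which only happens after the controller has removed the finalizer.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        try:
            custom_api.get_namespaced_custom_object(
                "example.com", "v1alpha1", "default", "mycrds", "demo-cr"
            )
        except ApiException as e:
            if e.status == 404:
                return  # CR is fully cleaned up
            raise
        time.sleep(interval_seconds)
    raise TimeoutError("demo-cr was not deleted within the timeout")

Call this right after delete_namespaced_custom_object returns. A watch on list_namespaced_custom_object with a field_selector on metadata.name, waiting for a DELETED event, could avoid polling, but a loop like the one above is usually sufficient.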
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.