kubernetes-client
Deletion propagation support in KubernetesServer
Hi,
I'm not sure whether it's desirable to go that far in mocking the API Server, but it would be really great if KubernetesServer honored the PropagationPolicy during delete with respect to owned resources.
Maybe as a start, it would be enough to implement it by always behaving as if it were a Foreground policy (since a Background one would need to do things in multiple threads…).
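To illustrate, this is roughly the kind of client call whose policy I'd like the mock to honor (just a sketch; the namespace and job name are made up, and I'm assuming a recent fabric8 client):

```java
// A sketch of the delete call whose propagation policy the mock would ideally honor.
// Namespace and resource names are made up; assumes a recent (6.x) fabric8 client.
import io.fabric8.kubernetes.api.model.DeletionPropagation;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ForegroundDeleteExample {
  public static void main(String[] args) {
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      // Ask the server to cascade the deletion to owned resources before removing the Job.
      client.batch().v1().jobs()
          .inNamespace("test")
          .withName("my-job")
          .withPropagationPolicy(DeletionPropagation.FOREGROUND)
          .delete();
    }
  }
}
```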
I'm guessing you're talking about the CRUD mode.
I'm not sure if you're only considering how this affects the provided resource, or if it should also consider dependent resources.
If we are to consider the dependent resources (e.g. when a Deployment is deleted, any dependent resources (ReplicaSet, Pod, etc.) are deleted too), the first question is how those dependent resources get created in CRUD mode. You most probably need to create them manually (if you create a Deployment, our mock server does nothing else but persist that entity). So it would be feasible to achieve what you want, but I'm not sure if that would be of any use (since you already need to create the dependent resources yourself).
If we are only considering the affected resource (the "owner"), but not its dependents: for the Foreground propagation policy, a deletionTimestamp should be added to the resource and a thread should be spawned to delete the resource later. For Background and Orphan, the current behavior applies.
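To make the Foreground case concrete, the idea would be something along these lines (a rough illustration against a hypothetical in-memory store, not the actual mock-server code):

```java
// Rough illustration only (not actual mock-server code): mark the owner as
// "deletion in progress" and remove it from a hypothetical in-memory store later.
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import io.fabric8.kubernetes.api.model.HasMetadata;

class ForegroundDeleteSketch {

  static void foregroundDelete(Map<String, HasMetadata> store, String name) {
    HasMetadata owner = store.get(name);
    if (owner == null) {
      return;
    }
    // 1. Add a deletionTimestamp so the resource shows up as being deleted.
    owner.getMetadata().setDeletionTimestamp(
        ZonedDateTime.now().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
    // 2. Remove the owner asynchronously, mimicking "delete the resource later".
    CompletableFuture.runAsync(() -> store.remove(name));
  }
}
```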
Anyway, I'm not that sure that this brings added value. What's your use case?
@manusa good points, all of them!
My use case: I'm implementing an operator that creates jobs and secrets. I manually specify the job as the owner of the secret, so that when I delete the job, the secret is deleted by the deletion propagation. I want to test this behaviour and also rely on it for testing other aspects of the operator.
So it's indeed for CRUD mode, and for deleting dependent resources. So yes, I would not expect the mock server to create any dependent resources by itself in this case; I would expect that, if there are matching resources, they get deleted along with their owner.
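For context, this is roughly how I wire up the ownership with the client builders (names and values here are made up for the example):

```java
// How the Secret declares the Job as its owner (names and values are made up).
import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
import io.fabric8.kubernetes.api.model.Secret;
import io.fabric8.kubernetes.api.model.SecretBuilder;
import io.fabric8.kubernetes.api.model.batch.v1.Job;

public class OwnedSecretExample {

  static Secret secretOwnedBy(Job job) {
    return new SecretBuilder()
        .withNewMetadata()
          .withName("my-operator-secret")
          .withNamespace(job.getMetadata().getNamespace())
          .addToOwnerReferences(new OwnerReferenceBuilder()
              .withApiVersion("batch/v1")
              .withKind("Job")
              .withName(job.getMetadata().getName())
              .withUid(job.getMetadata().getUid())
              .withController(true)
              // Relevant for Foreground propagation: block owner deletion until this is gone.
              .withBlockOwnerDeletion(true)
              .build())
        .endMetadata()
        .addToStringData("token", "not-a-real-secret")
        .build();
  }
}
```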
Concerning Foreground, I may have misunderstood the Kubernetes API, but I would not expect the owner to be deleted later. I would only expect that dependents are deleted synchronously with the deletion of the owner. No?
I manually specify the job as the owner of the secret
Given this use case, it does make sense indeed to implement this functionality.
Concerning Foreground, I may have misunderstood the kubernetes API but I would not expect the owner to be deleted later. I would only expect that dependents are deleted synchronously with the deletion of the owner. No?
You can read more about how this works here: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#controlling-how-the-garbage-collector-deletes-dependents
Foreground cascading deletion (...) the root object first enters a "deletion in progress" state (...) Once the "deletion in progress" state is set, the garbage collector deletes the object's dependents. Once the garbage collector has deleted all "blocking" dependents (objects with ownerReference.blockOwnerDeletion=true), it deletes the owner object.
So the owner object is deleted last, and only if all the dependent objects are deleted. I don't think there's any scenario where Kubernetes deletes objects synchronously. This is also related to what we discussed in #3246
So the owner object is deleted last, and only if all the dependent objects are deleted. I don't think there's any scenario where Kubernetes deletes objects synchronously. This is also related to what we discussed in #3246
Indeed, thank you again for the explanation, it makes more sense now to me :)
I don't think we would really need to implement it like this in the mock server though. At least not for my use case, I suppose.
So an initial implementation would be to delete dependent objects regardless of the DeletionPropagation. Anyway, I think that, compared with the initial complexity this enhancement requires, taking the propagation policy into consideration would be just a minor additional problem.
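Something like the following is the matching logic I have in mind; it's only a sketch with illustrative names, not a proposal for the actual mock-server code:

```java
// Sketch of the matching logic: delete every Secret whose ownerReferences point at the
// Job being deleted, regardless of the propagation policy. Names are illustrative only.
import io.fabric8.kubernetes.api.model.Secret;
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.client.KubernetesClient;

class CascadeDeleteSketch {

  static void deleteJobAndOwnedSecrets(KubernetesClient client, Job owner) {
    String ns = owner.getMetadata().getNamespace();
    String ownerUid = owner.getMetadata().getUid();

    // Delete the owner itself.
    client.batch().v1().jobs().inNamespace(ns)
        .withName(owner.getMetadata().getName()).delete();

    // Delete every dependent Secret that references the owner's UID.
    for (Secret secret : client.secrets().inNamespace(ns).list().getItems()) {
      boolean ownedByJob = secret.getMetadata().getOwnerReferences().stream()
          .anyMatch(ref -> ownerUid != null && ownerUid.equals(ref.getUid()));
      if (ownedByJob) {
        client.secrets().inNamespace(ns)
            .withName(secret.getMetadata().getName()).delete();
      }
    }
  }
}
```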
This issue has been automatically marked as stale because it has not had any activity since 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!
This is still an active subject :)
This issue has been automatically marked as stale because it has not had any activity since 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!
+1 for this (one day)..
BTW, I assume (or so it appears) that this also affects io.fabric8.openshift.client.server.mock.OpenShiftServer.
We recently added the Kube API Test module.
This module is intended for this scenario and provides the extended capabilities you require. It will deploy a real Kube API server.
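A test against it could then look roughly like this (only a sketch; the annotation and package names are written from memory, so please double-check them against the module documentation):

```java
// Only a sketch: the annotation and package names below (EnableKubeAPIServer, KubeConfig
// in io.fabric8.kubeapitest.junit) are assumptions from memory; check the module docs.
import io.fabric8.kubeapitest.junit.EnableKubeAPIServer;
import io.fabric8.kubeapitest.junit.KubeConfig;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.junit.jupiter.api.Test;

@EnableKubeAPIServer
class DeletionPropagationTest {

  // The extension is expected to inject a kubeconfig pointing at the started API server.
  @KubeConfig
  static String kubeConfigYaml;

  @Test
  void foregroundDeleteOfJob() {
    try (KubernetesClient client = new KubernetesClientBuilder()
        .withConfig(Config.fromKubeconfig(kubeConfigYaml)).build()) {
      // Create the Job and its owned Secret, delete the Job with
      // DeletionPropagation.FOREGROUND, and assert on the resulting state.
    }
  }
}
```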
Hey @manusa, that looks really cool! I will have to dig into that when I get a bit of time but it seems to clearly answer my needs :)