
Should use ordered termination when deleting StatefulSets

Open janetkuo opened this issue 7 years ago • 9 comments

Dashboard deletes a StatefulSet from the API, making its pods get GC'ed by the server at the same time. This is not desirable behavior for an application that relies on ordered termination.

kubectl delete statefulset scales the StatefulSet down to 0 first, which is an ordered termination.
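For context, the ordering kubectl effectively achieves can be sketched as follows. This is a minimal, hypothetical simulation in plain Python (no real Kubernetes API calls): scaling a StatefulSet to 0 removes pods one at a time, highest ordinal first, which is the termination order a StatefulSet guarantees.

```python
# Hypothetical sketch of ordered StatefulSet termination: pods are removed
# one at a time, highest ordinal first, before the object itself is deleted.
# This only models the ordering; it makes no real cluster calls.

def ordered_termination(name: str, replicas: int) -> list[str]:
    """Return pod names in the order they would be terminated on scale-down."""
    terminated = []
    for ordinal in range(replicas - 1, -1, -1):  # N-1, N-2, ..., 0
        terminated.append(f"{name}-{ordinal}")
    return terminated

print(ordered_termination("web", 3))  # ['web-2', 'web-1', 'web-0']
```

Deleting the object directly, by contrast, lets the server GC all pods concurrently, with no ordering guarantee.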

@bryk @kow3ns @foxish @erictune

janetkuo avatar Jul 25 '17 01:07 janetkuo

Thanks for the information. We'll fix that.

floreks avatar Jul 25 '17 06:07 floreks

@janetkuo Should it be implemented in the UI codebase? I'm not sure. If that's the behavior users expect, it should be part of the spec and live in the API server. Otherwise, it is an implementation detail of kubectl, and no other client will follow it. Think of, e.g., the OpenShift console, the GKE UI, or any other client.

bryk avatar Jul 25 '17 10:07 bryk

@mhenc

bryk avatar Jul 25 '17 10:07 bryk

I guess the behavior here is somewhat similar for Jobs, since their pods are not picked up by GC at all and we have to find and delete the Job's pods manually. In both scenarios some additional logic is required to handle resource deletion correctly.
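The manual cleanup described above amounts to matching the Job's label selector against pod labels. A minimal, hypothetical sketch of that matching logic, using plain data structures rather than real API objects:

```python
# Hypothetical sketch: select the pods a client would have to delete manually
# for a Job, by matching the Job's label selector against each pod's labels.
# Pod shape here is a plain dict; no real Kubernetes API calls are made.

def pods_for_job(selector: dict, pods: list[dict]) -> list[str]:
    """Return names of pods whose labels contain every selector key/value."""
    return [
        pod["name"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

pods = [
    {"name": "pi-abc12", "labels": {"job-name": "pi"}},
    {"name": "web-0", "labels": {"app": "web"}},
]
print(pods_for_job({"job-name": "pi"}, pods))  # ['pi-abc12']
```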

floreks avatar Jul 25 '17 10:07 floreks

But for a StatefulSet the pods are picked up.

And anyway, implementing such logic in kubectl should not be the way to go. Look at kubectl rolling-update for replication controllers. It's been moved to the server because virtually everyone agreed it should not be done in kubectl.

bryk avatar Jul 25 '17 11:07 bryk

I agree that it should be handled on the server side and not by clients. Even the documentation does not say how a custom client should handle specific actions. Do you think we should not merge #2081 then, and wait for Kubernetes to fix such cases on their side?

floreks avatar Jul 25 '17 12:07 floreks

@bryk WDYT about #2081? Should we push it forward or wait for core to handle it?

floreks avatar Aug 01 '17 07:08 floreks

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle stale

fejta-bot avatar Jan 02 '18 02:01 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle rotten /remove-lifecycle stale

fejta-bot avatar Feb 07 '18 09:02 fejta-bot