
ignoreDaemonSets is not working

Open Duranna66 opened this issue 1 year ago • 9 comments

Describe the bug: `ignoreDaemonSets()` has no effect; `Kubectl.drain()` still attempts to delete DaemonSet-managed pods.

  • Client Version: e.g. 19.0.0
  • Kubernetes Version: e.g. 1.19.3
  • Java Version: e.g. Java 17

To Reproduce — steps to reproduce the behavior:

```java
Kubectl.drain()
    .ignoreDaemonSets()
    .force()
    .name("nodeExample")
    .execute();
```

Expected behavior: DaemonSet pods should be skipped during the drain; instead, they start being deleted.

KubeConfig example:

```yaml
clusters:
- cluster:
    certificate-authority-data: example
    server: "example"
  name: example
contexts:
- context:
    cluster: example
    namespace: aclever-users
    user: users.tech-user-test
  name: example-tech-user-test
current-context: example
kind: Config
preferences: {}
users:
- name: users.tech-user-test
  user:
    privateKey:
    token: "shifr"
```


Duranna66 avatar Jul 02 '24 10:07 Duranna66

Can you provide more details about what you expect and what you are seeing?

brendandburns avatar Jul 02 '24 16:07 brendandburns

> Can you provide more details about what you expect and what you are seeing?

I expect that pods controlled by a DaemonSet will not be deleted when I drain, but that is not the case. Example response:

```json
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"calico-node-pbzsr\" is forbidden: User \"system:serviceaccount:exmaple:users.tech-user-test\" cannot delete resource \"pods\" in API group \"\" in the namespace \"calico-system\"","reason":"Forbidden","details":{"name":"calico-node-pbzsr","kind":"pods"},"code":403}
```

[screenshot attached]

Why does the k8s client try to delete this pod? Thanks for the feedback.

Duranna66 avatar Jul 03 '24 09:07 Duranna66

This is how we check for membership in the DaemonSet:

https://github.com/kubernetes-client/java/blob/master/extended/src/main/java/io/kubernetes/client/extended/kubectl/KubectlDrain.java#L76

Specifically looking for an owner reference with kind DaemonSet.

Can you share the YAML for that calico pod (kubectl get pod -o yaml ...) so that we can see what the reference is set to?
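For reference, the owner-reference check described above can be sketched standalone. The `Pod` and `OwnerRef` records below are hypothetical stand-ins for the generated client models (`V1Pod`, `V1OwnerReference`), so this compiles without the client on the classpath:

```java
import java.util.List;

// Hypothetical stand-ins for the generated V1Pod / V1OwnerReference models.
record OwnerRef(String kind, String name) {}
record Pod(String name, List<OwnerRef> ownerReferences) {}

public class DaemonSetCheck {
    // True if any owner reference on the pod has kind "DaemonSet".
    static boolean ownedByDaemonSet(Pod pod) {
        return pod.ownerReferences() != null
                && pod.ownerReferences().stream()
                       .anyMatch(ref -> "DaemonSet".equals(ref.kind()));
    }

    public static void main(String[] args) {
        Pod calico = new Pod("calico-node-pbzsr",
                List.of(new OwnerRef("DaemonSet", "calico-node")));
        Pod web = new Pod("web-7d4b9",
                List.of(new OwnerRef("ReplicaSet", "web-7d4b9")));
        System.out.println(ownedByDaemonSet(calico)); // true
        System.out.println(ownedByDaemonSet(web));    // false
    }
}
```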

brendandburns avatar Jul 03 '24 16:07 brendandburns

> This is how we check for membership in the DaemonSet:
>
> https://github.com/kubernetes-client/java/blob/master/extended/src/main/java/io/kubernetes/client/extended/kubectl/KubectlDrain.java#L76
>
> Specifically looking for an owner reference with kind DaemonSet.
>
> Can you share the YAML for that calico pod (kubectl get pod -o yaml ...) so that we can see what the reference is set to?

```yaml
ownerReferences:
- apiVersion: apps/v1
  blockOwnerDeletion: true
  controller: true
  kind: DaemonSet
  name: calico-node
```

Duranna66 avatar Jul 03 '24 19:07 Duranna66

Okay, let's check the code from your link:

```java
for (V1Pod pod : allPods.getItems()) {
  // at this point we know, that we have to ignore daemon set pods
  if (pod.getMetadata().getOwnerReferences() != null) {
    for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
      if (ref.getKind().equals("DaemonSet")) {
        continue;
      }
    }
  }
  deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
}
return node;
```

Even when the pod is owned by a DaemonSet, we still delete it, because the `continue` only applies to the inner `for` loop over owner references, not to the outer loop over pods.
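The bug can be reproduced in isolation. A minimal standalone sketch (hypothetical names and a plain `Map` instead of the client's types) showing that `continue` in the inner loop never prevents the delete step in the outer loop:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ContinueBugDemo {
    // Mirrors the buggy drain loop: "continue" only skips the rest of the
    // inner loop over owner kinds, so every pod still reaches the delete step.
    static List<String> buggyDrain(Map<String, List<String>> podOwners) {
        List<String> deleted = new ArrayList<>();
        for (var entry : podOwners.entrySet()) {
            for (String ownerKind : entry.getValue()) {
                if (ownerKind.equals("DaemonSet")) {
                    continue; // intended to skip the pod, but only skips this inner iteration
                }
            }
            deleted.add(entry.getKey()); // reached even for DaemonSet-owned pods
        }
        return deleted;
    }

    public static void main(String[] args) {
        List<String> deleted = buggyDrain(Map.of(
                "calico-node-pbzsr", List.of("DaemonSet"),
                "web-abc", List.of("ReplicaSet")));
        // Both pods end up "deleted", including the DaemonSet one.
        System.out.println(deleted);
    }
}
```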

Duranna66 avatar Jul 03 '24 21:07 Duranna66

Maybe try this one:

```java
boolean isDaemonSetPod;
for (V1Pod pod : allPods.getItems()) {
  // at this point we know, that we have to ignore daemon set pods
  isDaemonSetPod = false;
  if (pod.getMetadata().getOwnerReferences() != null) {
    for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
      if (ref.getKind().equals("DaemonSet")) {
        isDaemonSetPod = true;
        break;
      }
    }
  }
  if (!isDaemonSetPod) {
    deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
  }
}
```
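The flag-and-break approach can be exercised standalone (again with hypothetical names and a plain `Map` in place of the client's types) to confirm that DaemonSet-owned pods are now skipped:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FixedDrainDemo {
    // Same structure as the proposed fix: set a flag in the inner loop,
    // break out early, and only delete when the flag stayed false.
    static List<String> fixedDrain(Map<String, List<String>> podOwners) {
        List<String> deleted = new ArrayList<>();
        for (var entry : podOwners.entrySet()) {
            boolean isDaemonSetPod = false;
            for (String ownerKind : entry.getValue()) {
                if (ownerKind.equals("DaemonSet")) {
                    isDaemonSetPod = true;
                    break;
                }
            }
            if (!isDaemonSetPod) {
                deleted.add(entry.getKey());
            }
        }
        return deleted;
    }

    public static void main(String[] args) {
        List<String> deleted = fixedDrain(Map.of(
                "calico-node-pbzsr", List.of("DaemonSet"),
                "web-abc", List.of("ReplicaSet")));
        System.out.println(deleted); // only [web-abc]
    }
}
```

Equivalently, the inner loop could be collapsed into a stream: `refs.stream().anyMatch(r -> "DaemonSet".equals(r))`.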

Duranna66 avatar Jul 03 '24 21:07 Duranna66

We'd be happy to take a PR with any improvements to that code.

brendandburns avatar Jul 08 '24 16:07 brendandburns

Oh, I see you sent #3537 thank you! I will review it.

brendandburns avatar Jul 08 '24 16:07 brendandburns

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 06 '24 16:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 05 '24 17:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 05 '24 18:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's `/close not-planned` command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Dec 05 '24 18:12 k8s-ci-robot