ignoreDaemonSets is not working
Describe the bug
`ignoreDaemonSets()` on `Kubectl.drain()` does not prevent pods managed by a DaemonSet from being deleted during a drain.
Client Version: 19.0.0
Kubernetes Version: 1.19.3
Java Version: 17
To Reproduce
Steps to reproduce the behavior:

```java
Kubectl.drain()
    .ignoreDaemonSets()
    .force()
    .name("nodeExample")
    .execute();
```
Expected behavior
Pods managed by a DaemonSet should be skipped during the drain; instead, the drain starts deleting them.
KubeConfig example:

```yaml
clusters:
- cluster:
    certificate-authority-data: example
    server: "example"
  name: example
contexts:
- context:
    cluster: example
    namespace: aclever-users
    user: users.tech-user-test
  name: example-tech-user-test
current-context: example
kind: Config
preferences: {}
users:
- name: users.tech-user-test
  user:
    privateKey:
    token: "shifr"
```
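As an aside, here is a minimal sketch (ours, not from the report) of wiring a kubeconfig like the one above into the client before calling drain; the file path and class name are assumptions:

```java
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.util.Config;

public class LoadKubeConfigExample {
    public static void main(String[] args) throws Exception {
        // Build an ApiClient from the kubeconfig shown above (path is assumed)
        // and make it the process-wide default that Kubectl helpers pick up.
        ApiClient client = Config.fromConfig("/path/to/kubeconfig.yaml");
        Configuration.setDefaultApiClient(client);
    }
}
```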
Can you provide more details about what you expect and what you are seeing?
I expect that pods controlled by a DaemonSet will not be deleted when I drain a node, but they are.
Example: "kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods "calico-node-pbzsr" is forbidden: User "system:serviceaccount:exmaple:users.tech-user-test" cannot delete resource "pods" in API group "" in the namespace "calico-system"","reason":"Forbidden","details":{"name":"calico-node-pbzsr","kind":"pods"},"code":403}
Why does the k8s client try to delete this pod? Thanks for any feedback.
This is how we check for membership in the DaemonSet:
https://github.com/kubernetes-client/java/blob/master/extended/src/main/java/io/kubernetes/client/extended/kubectl/KubectlDrain.java#L76
Specifically, we look for an owner reference with kind `DaemonSet`.
Can you share the YAML for that calico pod (`kubectl get pod -o yaml ...`) so that we can see what the reference is set to?
```yaml
ownerReferences:
- apiVersion: apps/v1
  blockOwnerDeletion: true
  controller: true
  kind: DaemonSet
  name: calico-node
```
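For illustration, a minimal sketch (ours, not from the thread) of reading those owner references with the Java client; it assumes a default-configured `CoreV1Api`, and the `readNamespacedPod` overload varies slightly across client versions:

```java
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1OwnerReference;
import io.kubernetes.client.openapi.models.V1Pod;

public class OwnerReferenceExample {
    public static void main(String[] args) throws Exception {
        CoreV1Api api = new CoreV1Api();
        // Read the calico pod and print its owner references, mirroring
        // the `kubectl get pod -o yaml` output above.
        V1Pod pod = api.readNamespacedPod("calico-node-pbzsr", "calico-system", null);
        if (pod.getMetadata().getOwnerReferences() != null) {
            for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
                System.out.println(ref.getKind() + "/" + ref.getName()); // DaemonSet/calico-node
            }
        }
    }
}
```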
Okay, let's check the code from your link:
```java
for (V1Pod pod : allPods.getItems()) {
  // at this point we know, that we have to ignore daemon set pods
  if (pod.getMetadata().getOwnerReferences() != null) {
    for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
      if (ref.getKind().equals("DaemonSet")) {
        continue;
      }
    }
  }
  deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
}
return node;
```
Even when a pod has a `DaemonSet` owner reference, we still delete it, because the `continue` only applies to the inner `for` loop, not the outer one.
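To see why, here is a minimal, self-contained example (entirely ours) showing that an unlabeled `continue` only skips the rest of the innermost loop's iteration:

```java
for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
        continue; // only advances j; it does not skip the outer loop body below
    }
    System.out.println("still reached for i=" + i); // prints three times
}
```

So in `KubectlDrain`, the `deletePod(...)` call after the inner loop is always reached.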
Maybe try this instead:
```java
boolean isDaemonSetPod;
for (V1Pod pod : allPods.getItems()) {
  // at this point we know, that we have to ignore daemon set pods
  isDaemonSetPod = false;
  if (pod.getMetadata().getOwnerReferences() != null) {
    for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
      if (ref.getKind().equals("DaemonSet")) {
        isDaemonSetPod = true;
        break;
      }
    }
  }
  if (!isDaemonSetPod) {
    deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
  }
}
```
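An equivalent variant using a stream for the owner-reference test; this is a sketch under the same assumptions as the snippet above (`allPods`, `api`, and `deletePod` come from `KubectlDrain`), not necessarily what was merged:

```java
for (V1Pod pod : allPods.getItems()) {
  java.util.List<V1OwnerReference> refs = pod.getMetadata().getOwnerReferences();
  // Skip any pod that carries a DaemonSet owner reference; putting the
  // literal first in equals() also guards against a null kind.
  boolean isDaemonSetPod =
      refs != null && refs.stream().anyMatch(ref -> "DaemonSet".equals(ref.getKind()));
  if (!isDaemonSetPod) {
    deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
  }
}
```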
We'd be happy to take a PR with any improvements to that code.
Oh, I see you sent #3537, thank you! I will review it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.