kubectl
Support patch with selector
What would you like to be added: I came across a use case where I would like to patch multiple resources at once. Without looking it up first, I tried:
$ kubectl patch nodes ...
error: resource(s) were provided, but no name, label selector, or --all flag specified
Seeing that error, I tried to remedy it by adding a selector, as the error message suggests:
$ kubectl patch nodes --selector='mylabel' ...
Error: unknown flag: --selector
Well, interesting.
Why is this needed:
This is obviously possible as a two-step process with get nodes and a loop (see the sketch below), but it would be nice to be able to patch multiple resources using a selector.
This error is coming from the generic resource builder here and here.
I'm not sure if a conscious decision was made not to support label selectors or --all with patch. If so, we might want to change that error message to avoid confusion.
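For reference, a minimal sketch of that two-step workaround, assuming a hypothetical label mylabel=true on the target nodes and an equally hypothetical merge patch:
# Step 1: select the matching nodes by label; step 2: patch each one individually.
# mylabel=true and the annotation being added are placeholders for illustration.
kubectl get nodes -l mylabel=true -o name \
  | xargs -I{} kubectl patch {} --type=merge -p '{"metadata":{"annotations":{"example.com/patched":"true"}}}'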
@mbrancato to help drill down on the action here, what's your use case for patching multiple nodes?
/triage needs-information
@eddiezane I wanted to remove labels on multiple nodes, but I guess this applies to anyone wanting to relabel an existing resource.
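For context, the per-node patch that selector support would batch up looks roughly like the sketch below; foo and <node-name> are placeholders, and the JSON Patch remove op fails if the label is absent:
# Remove the label "foo" from a single node with a JSON Patch.
# Without selector support this has to be repeated for every node.
kubectl patch node <node-name> --type=json \
  -p '[{"op":"remove","path":"/metadata/labels/foo"}]'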
@mbrancato the kubectl label command lets you remove a label with a trailing dash, so foo- removes the label foo. This works with label selectors and --all.
~ kubectl label nodes foo=bar --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
pikube-2   Ready    <none>   237d   v1.18.6+k3s1
pikube-1   Ready    <none>   237d   v1.18.6+k3s1
~ kubectl label nodes foo- --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
~ kubectl get nodes -l foo=bar
No resources found.
~ kubectl label nodes pikube-0 foo=bar
node/pikube-0 labeled
~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
~ kubectl label nodes -l foo=bar foo-
node/pikube-0 labeled
~ kubectl get nodes -l foo=bar
No resources found.
It's tucked away as the last example in the help text. We could probably make that clearer.
Does that solve your use case?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is still needed. I ran into this yesterday when I wanted to clear conditions with a patch for all instances of my CRD.
/reopen /sig cli
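In the meantime, clearing conditions across all instances of a CRD can be approximated with a get/xargs loop; a sketch, where widgets is a hypothetical resource and --subresource=status assumes kubectl >= 1.24 and a CRD with the status subresource enabled:
# Interim workaround: clear status.conditions on every instance of the CRD.
# widgets is a placeholder resource name; drop --subresource=status if the
# CRD does not use the status subresource.
kubectl get widgets -o name \
  | xargs -I{} kubectl patch {} --subresource=status --type=merge -p '{"status":{"conditions":[]}}'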
@tnozicka: Reopened this issue.
/remove-lifecycle rotten
I found this issue while wanting to mass-remove finalizers in order to clean out a namespace.
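A similar interim loop covers the finalizer case; mycrd and my-namespace are placeholders, and note that force-clearing finalizers skips whatever cleanup those finalizers guard:
# Interim workaround: strip finalizers from every object of one kind in a namespace.
kubectl get mycrd -n my-namespace -o name \
  | xargs -I{} kubectl patch {} -n my-namespace --type=merge -p '{"metadata":{"finalizers":null}}'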
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
still very much needed
/reopen /assign
@tnozicka: Reopened this issue.
I have a PR ready in https://github.com/kubernetes/kubernetes/pull/121673
/remove-lifecycle stale
/remove-lifecycle rotten
/remove-triage needs-information (already marked as triage/accepted)
I know there is a pending pull request here, but I'm posting my hacky solution for those looking for a workaround in the meantime. In my case I had about 50 jobs whose names start with batch-19, and I wanted to patch the number of parallel pods permitted. I get all the jobs, use awk to filter by the batch number, and then patch all of those jobs at once:
# Code for filtering relevant resources, returning space separated list:
kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}'
# Command for patching that list of resources:
kubectl patch jobs -n demo $(kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}') --type=strategic --patch '{"spec":{"parallelism":4}}'
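If the jobs also carry a common label, the awk filter can be swapped for a label selector; a sketch, assuming a hypothetical label batch=19 on those jobs:
# Same patch, selecting the jobs by the (hypothetical) label batch=19 instead of awk.
# -o name prints job.batch/<name>, so cut strips the type prefix.
kubectl patch jobs -n demo $(kubectl get jobs -n demo -l batch=19 -o name | cut -d/ -f2) --type=strategic --patch '{"spec":{"parallelism":4}}'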