Support patch with selector

[Open] mbrancato opened this issue 5 years ago · 22 comments

What would you like to be added: I came across a use case where I would like to patch multiple things at once. Without checking the docs first, I tried:

$ kubectl patch nodes ...
error: resource(s) were provided, but no name, label selector, or --all flag specified

Seeing that error, I quickly tried to remedy it by adding a selector, as the error message says I should:

$ kubectl patch nodes --selector='mylabel' ...
Error: unknown flag: --selector

Well, interesting.

Why is this needed: This is obviously possible in a two-step process with get nodes and a loop, but it would be nice to be able to patch multiple things using a selector.
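
A rough sketch of that two-step approach (the mylabel selector and the annotation patch are just placeholders):

$ kubectl get nodes -l mylabel -o name \
    | xargs -I{} kubectl patch {} --type=merge -p '{"metadata":{"annotations":{"example":"value"}}}'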

mbrancato avatar Aug 14 '20 01:08 mbrancato

This error is coming from the generic resource builder here and here.

I'm not sure if a conscious decision was made to not support label selectors or --all with patch. If so we might want to change that error to avoid confusion.

@mbrancato, to help drill down on the action here: what's your use case for patching multiple nodes?

/triage needs-information

eddiezane avatar Aug 14 '20 20:08 eddiezane

@eddiezane I wanted to remove labels on multiple nodes, but I guess this applies to anyone wanting to relabel an existing resource.

mbrancato avatar Aug 14 '20 20:08 mbrancato

@mbrancato the kubectl label command allows you to remove labels with the syntax foo- (a trailing dash removes the label foo). This works with label selectors and --all.

 ~ kubectl label nodes foo=bar --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
 ~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
pikube-2   Ready    <none>   237d   v1.18.6+k3s1
pikube-1   Ready    <none>   237d   v1.18.6+k3s1
 ~ kubectl label nodes foo- --all
node/pikube-0 labeled
node/pikube-2 labeled
node/pikube-1 labeled
 ~ kubectl get nodes -l foo=bar
No resources found.
 ~ kubectl label nodes pikube-0 foo=bar
node/pikube-0 labeled
 ~ kubectl get nodes -l foo=bar
NAME       STATUS   ROLES    AGE    VERSION
pikube-0   Ready    master   237d   v1.18.6+k3s1
 ~ kubectl label nodes -l foo=bar foo-
node/pikube-0 labeled
 ~ kubectl get nodes -l foo=bar
No resources found.

It's tucked away as the last example of the help text. We could probably make that more clear.

Does that solve your use case?

eddiezane avatar Aug 15 '20 06:08 eddiezane

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot avatar Nov 13 '20 07:11 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot avatar Dec 13 '20 07:12 fejta-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

fejta-bot avatar Jan 12 '21 08:01 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jan 12 '21 08:01 k8s-ci-robot

This is still needed. I ran into this yesterday when I wanted to clear conditions with a patch on all instances of my CRD.
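
In the meantime the workaround is the same get-then-loop pattern (sketch only; mycrds is a placeholder for the CRD's plural name, and because conditions live under .status this relies on the --subresource=status flag available in newer kubectl releases):

$ kubectl get mycrds -o name \
    | xargs -I{} kubectl patch {} --subresource=status --type=merge -p '{"status":{"conditions":[]}}'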

/reopen
/sig cli

tnozicka avatar Jan 12 '21 11:01 tnozicka

@tnozicka: Reopened this issue.

In response to this:

This is still needed. I ran into this yesterday when I wanted to clear conditions with a patch on all instances of my CRD.

/reopen
/sig cli

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jan 12 '21 11:01 k8s-ci-robot

/remove-lifecycle rotten

tnozicka avatar Jan 12 '21 11:01 tnozicka

I found this issue while wanting to mass-remove finalizers in order to clean out a namespace.
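
For anyone else hitting this, the usual stopgap is the same kind of loop (sketch; mycrs and the namespace are placeholders, and blanking finalizers can orphan whatever external cleanup they guarded):

$ kubectl get mycrs -n stuck-namespace -o name \
    | xargs -I{} kubectl patch {} -n stuck-namespace --type=merge -p '{"metadata":{"finalizers":[]}}'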

joejulian avatar Mar 18 '21 20:03 joejulian

/triage accepted

eddiezane avatar May 26 '21 16:05 eddiezane

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 24 '21 17:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 23 '21 17:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Oct 23 '21 17:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 23 '21 17:10 k8s-ci-robot

still very much needed

/reopen
/assign

tnozicka avatar Nov 01 '23 14:11 tnozicka

@tnozicka: Reopened this issue.

In response to this:

still very much needed

/reopen
/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 01 '23 14:11 k8s-ci-robot

I have a PR ready in https://github.com/kubernetes/kubernetes/pull/121673

/remove-lifecycle stale
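
Assuming it merges, the UX would presumably mirror the selector/--all flags that label and annotate already take, e.g. something like (illustrative only, not merged behavior):

$ kubectl patch nodes -l mylabel --type=merge -p '{"metadata":{"annotations":{"example":"value"}}}'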

tnozicka avatar Nov 01 '23 16:11 tnozicka

/remove-lifecycle rotten

tnozicka avatar Nov 01 '23 16:11 tnozicka

/remove-triage needs-information (already marked as triage/accepted)

tnozicka avatar Nov 01 '23 16:11 tnozicka

I know there is a pending pull request here, but I'm posting my hacky solution for those looking for a workaround in the meantime. In my case I had about 50 jobs starting with batch-19 for which I wanted to patch the number of parallel pods permitted. I get all the jobs, use awk to filter by the batch number, and then patch all of those jobs at once:

# Command for filtering relevant resources, returning a space-separated list:
kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}'

# Command for patching that list of resources:
kubectl patch jobs -n demo $(kubectl get jobs -n demo | awk 'BEGIN { ORS=" "}; {if ($1 ~ "batch-19") print $1}') --type=strategic --patch '{"spec":{"parallelism":4}}'
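
A slightly more robust variant of the same idea (sketch; -o name avoids parsing the human-readable table, and the batch-19 prefix follows the job naming above):

# Same patch, but driven by -o name output instead of awk over the table:
kubectl get jobs -n demo -o name | grep '/batch-19' \
  | xargs -I{} kubectl patch {} -n demo --type=strategic --patch '{"spec":{"parallelism":4}}'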

taylorpaul avatar Jul 29 '24 20:07 taylorpaul