
kubectl label output from file input with multiple objects does not include a document separator

DenverJ opened this issue 3 years ago · 9 comments

What happened: When running "kubectl label" with file input containing multiple objects, no document separator is included in the output. This means only the last object will be picked up to be labelled on the cluster (or passed on to an apply command, etc.).

What you expected to happen: Multiple objects labelled, with a document separator between each object in the output.

How to reproduce it (as minimally and precisely as possible):

cat <<EOF > /tmp/cm_test.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: test1
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: test2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
EOF
cat /tmp/cm_test.yaml | kubectl label testlabel=foo -o yaml -f - --local

Output:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    testlabel: foo
  name: test1
apiVersion: v1  # No document separator here
kind: ConfigMap
metadata:
  labels:
    testlabel: foo
  name: test2

Anything else we need to know?: The same process and data, but using the "annotate" command instead of "label", works perfectly and includes document separators, as per the output below.

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    testlabel: foo
  name: test1
---
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    testlabel: foo
  name: test2

Environment:

  • Kubernetes client and server versions (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5+70fb84c", GitCommit:"3c28e7a79b58e78b4c1dc1ab7e5f6c6c2d3aedd3", GitTreeState:"clean", BuildDate:"2022-04-25T15:58:12Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

DenverJ · Aug 18 '22

The problem stems from the fact that label initializes a new printer for each object (https://github.com/kubernetes/kubernetes/blob/10eb7092f854c71122c03752465e868bce23c0b6/staging/src/k8s.io/kubectl/pkg/cmd/label/label.go#L390), unlike annotate.

/triage accepted
/assign

ardaguclu · Aug 18 '22

Currently, YAMLPrinter only adds the document separator (---) when more than one object has been printed by the same YAMLPrinter instance: https://github.com/kubernetes/kubernetes/blob/e4fca6469022309bc0ace863fce73054a0219464/staging/src/k8s.io/cli-runtime/pkg/printers/yaml.go#L48.
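For reference, the check in question looks roughly like this (a paraphrase of the linked yaml.go, with the internal-object guard and conversion details trimmed):

func (p *YAMLPrinter) PrintObj(obj runtime.Object, w io.Writer) error {
	count := atomic.AddInt64(&p.printCount, 1)
	if count > 1 {
		// The separator is only written from the second object onward,
		// and printCount lives on this particular printer instance.
		if _, err := w.Write([]byte("---\n")); err != nil {
			return err
		}
	}
	output, err := yaml.Marshal(obj)
	if err != nil {
		return err
	}
	_, err = fmt.Fprint(w, string(output))
	return err
}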

However, for the label command (and for a couple of other commands) there is no single YAMLPrinter; instead, in order to manage the per-object message in the printer, a new YAMLPrinter is initialized for each object: https://github.com/kubernetes/kubernetes/blob/10eb7092f854c71122c03752465e868bce23c0b6/staging/src/k8s.io/kubectl/pkg/cmd/label/label.go#L390.

That leaves each YAMLPrinter's count at 1, so none of them ever adds a document separator.
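In other words, the print path in label.go is roughly this (paraphrased; error handling and the actual labelling step trimmed):

return r.Visit(func(info *resource.Info, err error) error {
	if err != nil {
		return err
	}
	// ... apply the label to info.Object ...
	// A fresh printer is built for every object so that dataChangeMsg
	// ("labeled" vs "not labeled") can differ per object...
	printer, err := o.ToPrinter(dataChangeMsg)
	if err != nil {
		return err
	}
	// ...which means printCount is always 1 here and "---" is never written.
	return printer.PrintObj(info.Object, o.Out)
})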

This happens to work for the annotate command because the print message is not changed per object there (there is also an issue for that problem: https://github.com/kubernetes/kubernetes/issues/110123) and the printer is initialized once, in Complete().

I wonder whether it would be possible to add the document separator in every case, without checking the counter, here: https://github.com/kubernetes/kubernetes/blob/e4fca6469022309bc0ace863fce73054a0219464/staging/src/k8s.io/cli-runtime/pkg/printers/yaml.go#L48
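Alternatively, one possible direction (just a sketch, not a vetted fix, and it would sacrifice the per-object message that label currently prints) would be to hoist the printer out of the visit callback so that a single instance sees every object:

// Sketch only: assumes a fixed message is acceptable, which is not
// what label does today (dataChangeMsg varies per object).
printer, err := o.ToPrinter("labeled")
if err != nil {
	return err
}
return r.Visit(func(info *resource.Info, err error) error {
	if err != nil {
		return err
	}
	// ... apply the label to info.Object ...
	// From the second call onward this same printer emits "---".
	return printer.PrintObj(info.Object, o.Out)
})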

@eddiezane @brianpursley @soltysh

/unassign

ardaguclu · Aug 18 '22

Hi! I would like to take a look at this issue and see if I can help with it.

/assign

jaehnri · Aug 24 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Nov 22 '22

/remove-lifecycle stale

ardaguclu · Nov 22 '22

It looks like PR #110124 is already addressing this issue and has been iterated on and reviewed for a while now. I'm not really contributing to sig-cli anymore, so I won't look further into this. Unassigning and closing my PR so another person can tackle it!

/unassign

jaehnri · Jan 28 '23

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot · Jan 28 '24

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Apr 27 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · May 27 '24