kubectl label output from file input with multiple objects does not include a document separator
What happened: When running "kubectl label" with file input containing multiple objects, no document separator is included in the output. This means only the last object will be picked up to be labelled on the cluster (or passed on to an apply command, etc.).
What you expected to happen: All objects labelled, with a document separator between them in the output.
How to reproduce it (as minimally and precisely as possible):
```shell
cat <<EOF > /tmp/cm_test.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: test1
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: test2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
EOF

cat /tmp/cm_test.yaml | kubectl label testlabel=foo -o yaml -f - --local
```
Output:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    testlabel: foo
  name: test1
apiVersion: v1 # No record separator here
kind: ConfigMap
metadata:
  labels:
    testlabel: foo
  name: test2
```
Anything else we need to know?: The same process and data, but using the "annotate" command instead of "label", works perfectly and includes document separators, as per the output below.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    testlabel: foo
  name: test1
---
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    testlabel: foo
  name: test2
```
Environment:
- Kubernetes client and server versions (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5+70fb84c", GitCommit:"3c28e7a79b58e78b4c1dc1ab7e5f6c6c2d3aedd3", GitTreeState:"clean", BuildDate:"2022-04-25T15:58:12Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
The problem stems from the fact that label initializes a new printer for each object (https://github.com/kubernetes/kubernetes/blob/10eb7092f854c71122c03752465e868bce23c0b6/staging/src/k8s.io/kubectl/pkg/cmd/label/label.go#L390), unlike annotate.
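For illustration, here is a minimal, self-contained analog of the two patterns (toy code with made-up names like yamlishPrinter/printObj, not the actual kubectl or cli-runtime implementation): a shared, counter-based printer emits the separator from the second document onward, whereas a printer created fresh for every object never sees a count above one.

```go
package main

import (
	"fmt"
	"os"
)

// yamlishPrinter mimics the separator logic of cli-runtime's YAMLPrinter:
// "---" is written only when this particular instance has already printed
// at least one document.
type yamlishPrinter struct {
	printCount int
}

func (p *yamlishPrinter) printObj(doc string) {
	p.printCount++
	if p.printCount > 1 {
		fmt.Fprintln(os.Stdout, "---")
	}
	fmt.Fprint(os.Stdout, doc)
}

func main() {
	docs := []string{
		"kind: ConfigMap\nmetadata:\n  name: test1\n",
		"kind: ConfigMap\nmetadata:\n  name: test2\n",
	}

	fmt.Println("# annotate-style: one printer reused for all objects")
	shared := &yamlishPrinter{}
	for _, d := range docs {
		shared.printObj(d) // second call sees printCount == 2 and writes "---"
	}

	fmt.Println("# label-style: a new printer per object")
	for _, d := range docs {
		p := &yamlishPrinter{} // fresh instance, printCount never exceeds 1
		p.printObj(d)          // so no "---" is ever written
	}
}
```

Running the first loop prints a --- between the two documents, while the second loop does not, which matches the difference observed between annotate and label above.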
/triage accepted
/assign
Currently, YAMLPrinter only adds the document separator (---) when a single YAMLPrinter instance has printed more than one object: https://github.com/kubernetes/kubernetes/blob/e4fca6469022309bc0ace863fce73054a0219464/staging/src/k8s.io/cli-runtime/pkg/printers/yaml.go#L48.
However, for the label command (and a couple of other commands) there is no single YAMLPrinter; instead, to manage the message in the printer object, a new YAMLPrinter is initialized for every object: https://github.com/kubernetes/kubernetes/blob/10eb7092f854c71122c03752465e868bce23c0b6/staging/src/k8s.io/kubectl/pkg/cmd/label/label.go#L390.
As a result, each YAMLPrinter's count stays at 1, so none of them ever adds a document separator.
This happens to work for the annotate command because its print message is not being changed (there is also an issue for that problem: https://github.com/kubernetes/kubernetes/issues/110123) and the printer is initialized only once, in Complete.
I wonder whether it would be possible to always add the document separator, without checking the counter, here: https://github.com/kubernetes/kubernetes/blob/e4fca6469022309bc0ace863fce73054a0219464/staging/src/k8s.io/cli-runtime/pkg/printers/yaml.go#L48
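For context, here is a paraphrased sketch of the separator check being discussed (the names mirror the linked yaml.go, but this is a stand-in, not a verbatim copy; the real PrintObj also performs validation and runtime.Unknown handling that is omitted here):

```go
package printers // sketch only; mirrors k8s.io/cli-runtime/pkg/printers

import (
	"fmt"
	"io"
	"sync/atomic"

	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/yaml"
)

// YAMLPrinter here is a stand-in for the real type; the real struct has
// more fields and its PrintObj does more than is shown below.
type YAMLPrinter struct {
	printCount int64
}

func (p *YAMLPrinter) PrintObj(obj runtime.Object, w io.Writer) error {
	count := atomic.AddInt64(&p.printCount, 1)
	if count > 1 {
		// The separator is only written between objects printed through the
		// *same* printer instance, which is why label's per-object printers
		// never reach this branch.
		if _, err := w.Write([]byte("---\n")); err != nil {
			return err
		}
	}

	output, err := yaml.Marshal(obj)
	if err != nil {
		return err
	}
	_, err = fmt.Fprint(w, string(output))
	return err
}
```

Unconditionally writing the separator (i.e. dropping the count > 1 check) would make even a single-object print start with ---, which is still valid YAML but a visible change for every command using this printer; the alternative is for label to construct its printer once, as annotate effectively does, and reuse it across all objects.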
@eddiezane @brianpursley @soltysh
/unassign
Hi! I would like to take a look at this issue and see if I can help with it. /assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
It looks like PR #110124 is already addressing this issue and has been hardened and reviewed for a while now. I'm not really contributing to sig-cli anymore, so I won't look further into this. Unassigning and closing my PR so another person can tackle it!
/unassign
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten