Broken output schema when running `kubectl apply --prune --dry-run=server -o yaml|json` with a single resource created.
What happened?
When applying a single resource (with `-o yaml` or `-o json`), kubectl prints that object directly instead of wrapping it in an `items` array, as it does when applying multiple objects.
However, when also instructed to `--prune` resources, it prints the pruned resources too but does not take them into account in this output-schema decision.
So it ends up printing the raw objects one after another. For yaml this means it prints one big document with duplicate fields (a "merge" of all the created and pruned resources); for json it prints the objects back to back without nesting them in an array.
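For illustration, with one created ConfigMap and one pruned Secret (the names here are made up), the yaml output ends up shaped roughly like this:

```yaml
apiVersion: v1
kind: ConfigMap          # the created object
metadata:
  name: keep-me
apiVersion: v1           # duplicate top-level keys: the pruned object
kind: Secret             # is appended into the same document
metadata:
  name: prune-me
```

The equivalent json output is two top-level objects printed back to back, which a standard parser won't accept as a single document.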
What did you expect to happen?
I'd expect it to print an `items` array whenever it is about to print more than one object, whether those objects were created or pruned.
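That is, the same shape kubectl already uses when applying multiple resources; a sketch (fields abbreviated, names made up):

```yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ConfigMap        # created
  metadata:
    name: keep-me
- apiVersion: v1
  kind: Secret           # pruned
  metadata:
    name: prune-me
```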
How can we reproduce it (as minimally and precisely as possible)?
Apply a resource of any kind (e.g. a Secret). Then apply a different resource with `--prune`, leaving the first one out of the input so that it gets pruned. Use `kubectl apply --dry-run=server -o yaml --prune` (or `-o json`). A minimal sketch follows.
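In this sketch the file names, resource names, and label selector are hypothetical; note that `--prune` requires a selector, and both manifests must carry the matching label:

```console
# prune-me.yaml contains a Secret named prune-me labeled app=demo
$ kubectl apply -f prune-me.yaml -l app=demo

# keep-me.yaml contains a different resource (e.g. a ConfigMap) with
# the same label; the Secret is absent from the input, so --prune
# will delete it
$ kubectl apply -f keep-me.yaml -l app=demo --prune --dry-run=server -o yaml
```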
Kubernetes version
$ kubectl version
v1.23.5
@apelisse any thoughts here?
First, prune is not really supported (@seans) so I don't know how much we'll want to support this.
Also, what do you mean you would expect "an array of items"?
/assign
Is dry-run=server supported?
Yep, that's how you ask things to the server. Somewhere we must build a list but we don't with pruning?
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.