
Broken output schema when running `kubectl apply --prune --dry-run=xyz -o yaml/json` with a single resource created.

cube2222 opened this issue 3 years ago • 6 comments

What happened?

When applying a single resource (with `-o yaml` or `-o json`), kubectl prints that object directly instead of wrapping it in an "items" array, as it does when applying multiple objects.

However, if also instructed to `--prune` resources, it prints the pruned resources as well, but does not take them into account in this output-schema decision.

So it ends up printing the raw objects one after another. For YAML, that means one big object with duplicate top-level fields (a "merge" of all the created and pruned resources); for JSON, it prints the JSON objects back to back without nesting them in an array.
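For illustration, the broken YAML output looks roughly like this (a sketch only; the resource names are hypothetical and most fields are omitted):

```yaml
# Illustrative sketch only; resource names are hypothetical and most
# fields are omitted. The created ConfigMap and the pruned Secret are
# concatenated into one YAML document, so top-level keys repeat.
apiVersion: v1
kind: ConfigMap
metadata:
  name: created-configmap
apiVersion: v1   # duplicate top-level key: the pruned Secret starts here
kind: Secret
metadata:
  name: pruned-secret
```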

What did you expect to happen?

I'd expect it to print an array of items whenever it needs to print more than one object, whether those objects were created or pruned.
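Something like the following sketch (same hypothetical resources as above, fields abridged):

```yaml
# Illustrative sketch of the expected schema: one v1 List wrapping
# every printed object, created or pruned (fields abridged).
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: created-configmap
- apiVersion: v1
  kind: Secret
  metadata:
    name: pruned-secret
```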

How can we reproduce it (as minimally and precisely as possible)?

Apply a resource of any kind (e.g. a Secret). Then apply a different resource with `--prune`, without including the first one, so that it gets pruned. Use `kubectl apply --prune --dry-run=server -o yaml/json`.
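A minimal sketch of those steps (the resource names and the `app=prune-demo` label are illustrative):

```sh
# Step 1: apply a first resource carrying the label the prune selector
# will match. (Secret is on kubectl's default prune allowlist.)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: pruned-secret
  labels:
    app: prune-demo
EOF

# Step 2: apply a *different* resource with the same label and --prune.
# The Secret is no longer in the manifest, so it gets pruned, and the
# output mixes the created ConfigMap with the pruned Secret.
cat <<'EOF' | kubectl apply --prune -l app=prune-demo --dry-run=server -o yaml -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: created-configmap
  labels:
    app: prune-demo
EOF
```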

Kubernetes version

$ kubectl version
v1.23.5

cube2222 · Apr 05 '22 13:04

@apelisse any thoughts here?

eddiezane · Apr 06 '22 01:04

First, prune is not really supported (@seans), so I don't know how much we'll want to support this.

Also, what do you mean when you say you would expect "an array of items"?

apelisse · Apr 06 '22 03:04

/assign

Is dry-run=server supported?

seans3 · Apr 27 '22 16:04

Yep, that's how you ask the server for things. Somewhere we must be building a list, but we don't when pruning?

apelisse · Apr 27 '22 16:04

/triage accepted

seans3 · Apr 29 '22 16:04

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jul 28 '22 16:07

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Aug 27 '22 17:08

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Sep 26 '22 17:09

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Sep 26 '22 17:09