
kubectl edit or apply cannot update .status when the status subresource is enabled

Open nightfury1204 opened this issue 6 years ago • 42 comments

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • minikube

What happened: I have a CRD with the status subresource enabled. When I edit the status of the custom resource using kubectl edit, the changes don't apply.

What you expected to happen: kubectl edit should apply changes to the status field.

How to reproduce it (as minimally and precisely as possible):

$ cat customresourcedefination.yaml 
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.try.com
spec:
  group: try.com
  version: v1alpha1
  scope: Namespaced
  subresources:
    status: {}
  names:
    plural: foos
    singular: foo
    kind: Foo

$ kubectl apply -f customresourcedefination.yaml
$ cat foo.yaml 
apiVersion: try.com/v1alpha1
kind: Foo
metadata:
  name: my-foo
status:
  hello: world

$ kubectl apply -f foo.yaml
# edit the status
$ kubectl edit foo/my-foo

Anything else we need to know: If the status subresource is disabled for the CRD, kubectl edit works fine.

nightfury1204 avatar Nov 21 '18 06:11 nightfury1204

/kind bug /sig cli

tamalsaha avatar Nov 21 '18 07:11 tamalsaha

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Feb 19 '19 08:02 fejta-bot

Related to https://github.com/kubernetes/kubernetes/issues/60845

flyer103 avatar Mar 07 '19 08:03 flyer103

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Apr 06 '19 08:04 fejta-bot

/remove-lifecycle rotten

coderanger avatar Apr 23 '19 18:04 coderanger

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jul 22 '19 18:07 fejta-bot

/remove-lifecycle stale /area kubectl

seans3 avatar Jul 22 '19 20:07 seans3

/priority P2

seans3 avatar Jul 22 '19 20:07 seans3

I guess this is intended behavior according to the design proposal:

If the /status subresource is enabled, the following behaviors change:

  • The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).

DBarthe avatar Jul 26 '19 13:07 DBarthe
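The ignored-update behavior quoted above can be checked directly against the Foo example from the reproduction steps (a sketch; resource and field names are taken from that example):

```shell
# With the status subresource enabled, a patch to the main resource
# endpoint is accepted by the API server, but the status portion of the
# request is silently dropped rather than rejected.
kubectl patch foo my-foo --type=merge -p '{"status":{"hello":"universe"}}'

# Reading the object back shows the status field is unchanged.
kubectl get foo my-foo -o jsonpath='{.status.hello}'
```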

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Oct 24 '19 14:10 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Nov 23 '19 15:11 fejta-bot

/remove-lifecycle rotten

florianrusch avatar Nov 27 '19 06:11 florianrusch

From @DBarthe

I guess this is intended behavior according to the design proposal:

If the /status subresource is enabled, the following behaviors change:

  • The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).

Wouldn't it be nice if we could change the status field somehow with kubectl? Maybe with an extra command like kubectl edit foo/status my-foo or kubectl edit foo.status my-foo. I'm not sure if there is any convention for what such commands should look like.

Our use-case: We've built an operator (shell-operator) for our CRDs and would like to edit the status field with kubectl within this operator.

florianrusch avatar Nov 27 '19 06:11 florianrusch

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Feb 25 '20 07:02 fejta-bot

It would certainly be nice to be able to do this with kubectl rather than kludge it with curl.

atykhyy avatar Feb 29 '20 15:02 atykhyy
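For anyone who needs the curl kludge in the meantime, here is a rough sketch. It assumes kubectl proxy is running on its default port, and uses the Foo example from the reproduction steps (the default namespace and port are assumptions):

```shell
# Expose the API server on localhost without worrying about auth headers.
kubectl proxy --port=8001 &

# PATCH the /status subresource endpoint directly. The path follows the
# standard pattern for namespaced custom resources:
#   /apis/<group>/<version>/namespaces/<namespace>/<plural>/<name>/status
curl -X PATCH \
  -H 'Content-Type: application/merge-patch+json' \
  --data '{"status":{"hello":"world"}}' \
  http://127.0.0.1:8001/apis/try.com/v1alpha1/namespaces/default/foos/my-foo/status
```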

/lifecycle frozen /remove-lifecycle stale /remove-lifecycle rotten

thockin avatar Mar 11 '20 16:03 thockin

I guess this is intended behavior according to the design proposal:

If the /status subresource is enabled, the following behaviors change:

  • The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).

Closing as this is intended behavior.

seans3 avatar Apr 29 '20 16:04 seans3

/close

seans3 avatar Apr 29 '20 16:04 seans3

@seans3: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 29 '20 16:04 k8s-ci-robot

I think the issue here is that there seems to be no way to use kubectl to edit a status?

E.g. something like kubectl edit pod --status foo or kubectl edit pod --subresource=status, or something like that.


thockin avatar Apr 29 '20 16:04 thockin

I hate to be That Guy that reopens issues, but this is a major gap for manual testing and scripting.

thockin avatar Apr 29 '20 16:04 thockin

/reopen

thockin avatar Apr 29 '20 16:04 thockin

@thockin: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 29 '20 16:04 k8s-ci-robot

/remove-kind bug /kind feature

I agree this is a really annoying gap. Adding --subresource support for get/edit/apply/patch commands would make sense to me.

xref an attempt to do this at https://github.com/kubernetes/kubernetes/pull/60902 and a need for it at https://github.com/kubernetes/kubernetes/issues/15858#issuecomment-624686815

liggitt avatar May 06 '20 14:05 liggitt

this seems higher priority than P2 to me

liggitt avatar May 06 '20 14:05 liggitt

/remove-priority P2 /priority P1

seans3 avatar May 06 '20 16:05 seans3

SIG CLI meeting on May 20th agreed to move forward on this and prioritize it. As work on this progresses, we'd like to get feedback from the larger community.

/assign /assign @eddiezane

seans3 avatar May 20 '20 16:05 seans3

@seans3: GitHub didn't allow me to assign the following users: eddiezane.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to this:

SIG CLI meeting on May 20th agreed to move forward on this and prioritize it. As work on this progresses, we'd like to get feedback from the larger community.

/assign /assign @eddiezane

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 20 '20 16:05 k8s-ci-robot

/assign

eddiezane avatar May 20 '20 17:05 eddiezane

Are there any updates on this? The related linked issues/PRs are closed, and I wanted to see if there is any additional information about the plans for this.

detiber avatar Jun 30 '20 13:06 detiber
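This capability eventually landed in kubectl itself: v1.24 introduced a --subresource flag (initially alpha) on get, patch, edit, and replace, which covers the cases discussed in this thread. Using the Foo example from the reproduction steps:

```shell
# Requires kubectl v1.24 or newer.
kubectl get foo my-foo --subresource=status
kubectl edit foo my-foo --subresource=status
kubectl patch foo my-foo --subresource=status --type=merge \
  -p '{"status":{"hello":"world"}}'
```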