kubectl edit or apply cannot update .status when the status subresource is enabled
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- minikube
What happened:
I have a CRD with the status subresource enabled. When I edit the status of the custom resource using kubectl edit, the changes don't apply.
What you expected to happen:
kubectl edit should apply the changes to the status field.
How to reproduce it (as minimally and precisely as possible):
$ cat customresourcedefination.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.try.com
spec:
  group: try.com
  version: v1alpha1
  scope: Namespaced
  subresources:
    status: {}
  names:
    plural: foos
    singular: foo
    kind: Foo
$ kubectl apply -f customresourcedefination.yaml
$ cat foo.yaml
apiVersion: try.com/v1alpha1
kind: Foo
metadata:
  name: my-foo
status:
  hello: world
$ kubectl apply -f foo.yaml
# edit the status
$ kubectl edit foo/my-foo
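One way to confirm the edit was silently dropped (not part of the original report, just a hypothetical verification step): change status.hello in the editor, save, and re-read the object.
# With the status subresource enabled, the value read back is still the
# one from foo.yaml, not the edited one.
$ kubectl get foo my-foo -o jsonpath='{.status}'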
Anything else we need to know:
If the status subresource is disabled for the CRD, then kubectl edit works fine.
/kind bug /sig cli
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Related to https://github.com/kubernetes/kubernetes/issues/60845
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale /area kubectl
/priority P2
I guess this is intended behavior according to the design proposal:
If the /status subresource is enabled, the following behaviors change:
- The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
From @DBarthe
I guess this is intended behavior according to the design proposal:
If the /status subresource is enabled, the following behaviors change:
- The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).
Wouldn't it be nice if we could change the status field in some way with kubectl? Maybe with an extra command like kubectl edit foo/status my-foo or kubectl edit foo.status my-foo. I'm not sure if there is any convention for what this kind of command should look like.
Our use-case:
We've built an operator (shell-operator) for our CRDs and would like to edit the status field with kubectl from within this operator.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
It would certainly be nice to be able to do this with kubectl rather than kludge it with curl.
/lifecycle frozen /remove-lifecycle stale /remove-lifecycle rotten
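For reference, a minimal sketch of the curl kludge mentioned above, assuming kubectl proxy is running on its default port 8001 and the Foo example from the report lives in the default namespace:
# Start a local proxy to the API server (it blocks, so run it in the background).
$ kubectl proxy &
# PATCH the /status subpath directly; the same patch sent to the main
# resource endpoint would be silently ignored.
$ curl -s -X PATCH \
    -H "Content-Type: application/merge-patch+json" \
    --data '{"status":{"hello":"universe"}}' \
    http://localhost:8001/apis/try.com/v1alpha1/namespaces/default/foos/my-foo/status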
I guess this is intended behavior according to the design proposal:
If the /status subresource is enabled, the following behaviors change:
- The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources).
Closing as this is intended behavior.
/close
@seans3: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think the issue here is that there seems to be no way to use kubectl to edit a status?
E.g. something like kubectl edit pod --status foo or kubectl edit pod --subresource=status or something like that.
I hate to be That Guy that reopens issues, but this is a major gap for manual testing and scripting.
/reopen
@thockin: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug /kind feature
I agree this is a really annoying gap. Adding --subresource support for the get/edit/apply/patch commands would make sense to me.
xref an attempt to do this at https://github.com/kubernetes/kubernetes/pull/60902 and a need for it at https://github.com/kubernetes/kubernetes/issues/15858#issuecomment-624686815
this seems higher priority than P2 to me
/remove-priority P2 /priority P1
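To make the proposal concrete, here is a sketch of what --subresource support could look like on those commands; the exact flag name and the set of commands covered were still under discussion at this point, so treat these invocations as illustrative rather than final:
# Read the status subresource instead of the main resource.
$ kubectl get foo my-foo --subresource=status -o yaml
# Open the status subresource in an editor; the result is written back to /status.
$ kubectl edit foo my-foo --subresource=status
# Patch the status subresource directly.
$ kubectl patch foo my-foo --subresource=status --type=merge -p '{"status":{"hello":"universe"}}'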
SIG CLI meeting on May 20th agreed to move forward on this and prioritize it. As work on this progresses, we'd like to get feedback from the larger community.
/assign /assign @eddiezane
@seans3: GitHub didn't allow me to assign the following users: eddiezane.
Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
SIG CLI meeting on May 20th agreed to move forward on this and prioritize it. As work on this progresses, we'd like to get feedback from the larger community.
/assign /assign @eddiezane
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
Are there any updates on this? It looks like the related linked issues/PRs are closed, and I wanted to see if there is any additional information about the plans for this.