Support for CRUD subresources in the Client
Currently, I do not see a way to use subresources from the client.
I suggest that we add a variadic argument for subresources. For example: https://github.com/kubernetes/client-go/blob/master/dynamic/interface.go#L32
Or we may need to add the subresources to the Options for functions that already have a variadic argument?
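To make the idea concrete, here is a purely hypothetical sketch of what such a variadic could look like, modeled on the dynamic client's `subresources ...string` parameter; the interface shape and signatures are illustrative and not part of controller-runtime:

```go
package client

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
)

// Hypothetical sketch only, not the actual controller-runtime interface:
// a trailing variadic names the subresource, so that e.g.
// Update(ctx, pod, "status") would hit pods/<name>/status.
type Writer interface {
	// Create saves obj, or the named subresource of obj, in the cluster.
	Create(ctx context.Context, obj runtime.Object, subresources ...string) error

	// Update updates obj, or the named subresource of obj.
	Update(ctx context.Context, obj runtime.Object, subresources ...string) error
}
```

A second variadic would not compose with methods that already take variadic options, which is presumably where the options-based alternative above comes from.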
There is limited support for the status subresource via https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/client/client.go#L143
I have a similar use case here that needs to evict pods instead of simply deleting them, using the eviction subresource of Pod.
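For reference, reaching the eviction subresource today means dropping down to a typed client-go clientset (or a raw REST call); a rough sketch, assuming a recent client-go where the pod expansion methods take a context:

```go
package example

import (
	"context"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// evictPod evicts a pod via the eviction subresource using client-go directly,
// since the controller-runtime client cannot express this today.
func evictPod(ctx context.Context, cfg *rest.Config, namespace, name string) error {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// POSTs to /api/v1/namespaces/<namespace>/pods/<name>/eviction.
	return clientset.CoreV1().Pods(namespace).Evict(ctx, &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
	})
}
```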
/kind feature
We had some initial ideas for how to design this, but I think the interface potentially needs a bit more work. For instance, should we do client.Subresource("name").Update(obj)? Can we discover name from the object? (maybe?). Thoughts and use cases always welcome :-)
So you're thinking that we'd have a subresources client with an interface containing Create, Update, Delete, and Get that looks exactly the same as the current interface?
I don't understand how we could determine the subresources for the object. Could you explain what you had in mind? I'd love to look into this.
I'm assuming that we'd want to keep the StatusWriter, since status is so common that the helper makes sense?
Depending on the subresource, we could use a RESTMapper to convert a Kind to the corresponding subresource entry in discovery, IIRC, but I haven't confirmed that -- it's just an initial thought off the top of my head. This doesn't work with subresources like status, though (even though we deal with status separately). I'd be curious to see what seemed more readable or ergonomic.
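For what it's worth, subresources do appear in discovery as `resource/subresource` entries (e.g. `deployments/scale`, `pods/eviction`), so a discovery-backed lookup is at least plausible; a small illustrative sketch (the function and its wiring are made up for this example):

```go
package example

import (
	"fmt"
	"strings"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// listSubresources prints the subresources that discovery advertises for a
// group/version; they show up as APIResource entries whose Name contains "/".
func listSubresources(cfg *rest.Config, groupVersion string) error {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	resources, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if err != nil {
		return err
	}
	for _, r := range resources.APIResources {
		if strings.Contains(r.Name, "/") {
			fmt.Printf("%s %s (kind %s)\n", groupVersion, r.Name, r.Kind)
		}
	}
	return nil
}
```

Mapping a Kind to those entries would presumably still need the RESTMapper to get from Kind to resource name first, as suggested above.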
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Is this issue looking for help? I need the client to support subresources such as get pods/log.
yeah. In general, anything in CR that doesn't have someone working on it is open for help -- I tend to apply the help-wanted tag a bit less in cases where the design might be a bit thorny or might require serious apimachinery spelunking, but if you're up for some back-and-forth on design, I'm happy to have someone working on it :-)
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten /lifecycle frozen
+1 to this. Is there a quick way to document that this isn't supported, or return a 5xx when one tries to do so against envtest? Kubebuilder (which uses envtest) suggests users update status via err = r.Status().Update(context.Background(), instance). My controller was failing a new test I wrote because the Update was returning 404, which was really perplexing.
@cadedaniel I believe that the status subresource should work. If you're getting a 404 return code, I would double-check whether your CRD has the status subresource enabled.
namely, if you're getting 404, make sure that you've done // +kubebuilder:subresource:status on the type definition for your CRD, so you get a CRD with the status subresource enabled.
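For anyone else hitting the same 404, a minimal illustration of where the marker goes (the type names are placeholders): with the marker present and the regenerated CRD installed, r.Status().Update(context.Background(), instance) against envtest should succeed.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type MyAppSpec struct{}

type MyAppStatus struct {
	Ready bool `json:"ready,omitempty"`
}

// The marker below is what makes controller-gen emit a CRD with the status
// subresource enabled; without it, envtest serves no /status endpoint and
// Status().Update returns 404.

// +kubebuilder:subresource:status
type MyApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyAppSpec   `json:"spec,omitempty"`
	Status MyAppStatus `json:"status,omitempty"`
}
```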
Yes, I must have missed this. Thanks, ignore my earlier comment please.
/help
@DirectXMan12: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind design
Status is working; are there any plans to include the scale subresource as well? I'd like to control it from the client the same way it can be done via client-go.
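For comparison, this is roughly how the scale subresource is reached through a typed client-go clientset (recent versions, where these methods take a context); the controller-runtime client has no equivalent yet:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// scaleDeployment reads and writes the scale subresource of a Deployment
// using client-go directly.
func scaleDeployment(ctx context.Context, cfg *rest.Config, namespace, name string, replicas int32) error {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// GET  .../apis/apps/v1/namespaces/<ns>/deployments/<name>/scale
	scale, err := clientset.AppsV1().Deployments(namespace).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	// PUT  .../apis/apps/v1/namespaces/<ns>/deployments/<name>/scale
	_, err = clientset.AppsV1().Deployments(namespace).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```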
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Any updates on this? If there is a plan for how this should be implemented or we can agree on a design, I might be able to help with the implementation :)
@tim-ebert We're waiting for a design proposal; see designs/ for a template and examples.
heads up for anyone deciding to tackle this, versioning gets weird with scale: https://github.com/kubernetes/client-go/blob/36233866f1c7c0ad3bdac1fc466cb5de3746cfa2/scale/client.go#L182-L201
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Is this issue looking for help? I need the client to support subresources such as get pods/log.
Those are not CRUD subresources, see https://github.com/kubernetes-sigs/controller-runtime/issues/452 for that
@DirectXMan12 What is the concern with adding a simple ForSubresource(name string) Create/Patch option for this? It seems pretty straightforward and I don't think it requires any external API changes. It obviously allows for misuse if people use it for non-existent subresources, but I think that should be OK if we add a warning to it?
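To make that suggestion concrete, a purely hypothetical sketch of what such an option could look like; none of these names exist in controller-runtime, and the option structs here are stand-ins for the real CreateOptions/PatchOptions:

```go
package example

// CreateOptions and PatchOptions stand in for the real controller-runtime
// option structs, extended with a hypothetical Subresource field.
type CreateOptions struct {
	// Subresource, if set, directs the request at .../<resource>/<name>/<Subresource>.
	Subresource string
}

type PatchOptions struct {
	Subresource string
}

// ForSubresource returns an option for Create and Patch that targets the
// named subresource, e.g. "status", "scale", or "eviction". It would be the
// caller's responsibility to name a subresource the API server serves.
func ForSubresource(name string) forSubresource { return forSubresource{name: name} }

type forSubresource struct{ name string }

func (f forSubresource) ApplyToCreate(o *CreateOptions) { o.Subresource = f.name }
func (f forSubresource) ApplyToPatch(o *PatchOptions)   { o.Subresource = f.name }
```

Usage might then read something like c.Patch(ctx, deployment, patch, client.ForSubresource("scale")).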
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten