content-type for patch commands
Hi all.
In this PR I'd like to discuss a possible solution to the problem of selecting the content type when patching Kubernetes objects. The problem is well documented in https://github.com/kubernetes-client/python/issues/866 and https://github.com/tomplus/kubernetes_asyncio/issues/68
The idea is to introduce a new parameter in ApiClient to control the value returned by the select_header_content_type
method, and to remove the problematic detection from rest.py. Optionally, we can add this as an argument to each patch method.
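For context, here is a rough, self-contained sketch of the kind of type-based guessing this proposal replaces and how an explicit override would take precedence. The function and parameter names below are illustrative only, not the actual client API:

```python
# Simplified sketch of the heuristic the generated rest.py effectively
# applies today: it guesses the patch content type from the Python type
# of the body. Names here are illustrative, not the real client API.

def guess_patch_content_type(body):
    # A list body is assumed to be a JSON Patch; anything else is
    # treated as a strategic merge patch. This guess is wrong whenever
    # the caller actually wants, say, a JSON merge patch.
    if isinstance(body, list):
        return "application/json-patch+json"
    return "application/strategic-merge-patch+json"

def select_header_content_type(body, override=None):
    # Proposed behaviour: an explicit override (e.g. configured on
    # ApiClient) wins; the old guess remains only as a fallback.
    if override is not None:
        return override
    return guess_patch_content_type(body)

# Without the override, a dict body can never be sent as a JSON merge patch:
assert select_header_content_type({"spec": {"replicas": 2}}) == \
    "application/strategic-merge-patch+json"

# With the override, the caller decides:
assert select_header_content_type(
    {"spec": {"replicas": 2}},
    override="application/merge-patch+json",
) == "application/merge-patch+json"
```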
It will be a breaking change for applications that use patch_* methods.
Please take a look. If we accept this solution, I'll prepare a PR in openapi-generator and a temporary patch for this repo.
cc: @roycaihw @yliaog @HaraldGustafsson @nolar
Thanks.
@roycaihw Could you take a look at the latest concept? I'd like to prepare the generator to handle it. Thanks.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Any update on this? This would be really handy for me.
Edit: Sorry there was a typo in my first message
/remove-lifecycle stale
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: tomplus
To complete the pull request process, please assign yliaog after the PR has been reviewed. You can assign the PR to them by writing /assign @yliaog in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
I'm back on this PR. I had to squash the previous commits and rebase onto the master branch (it was very old, from before openapi-generator). I hope this helps the discussion. This PR won't be merged as is, because some changes have to be added to the generator; it serves as an example of how the result will look.
Thanks.
Hi, is there any update here?
Regards
It would be really nice if someone could look at @tomplus's work. I seem to have to monkey patch around this every time I want to use this library.
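For readers hitting the same wall, the monkey patch being referred to typically looks something like the sketch below. FakeApiClient is a stand-in so the example is self-contained; only the method name select_header_content_type is taken from the real library:

```python
# Sketch of the common workaround: override the client's content-type
# selection so patch calls use the type you actually want.
# FakeApiClient stands in for kubernetes.client.ApiClient here; the
# real class exposes a method with the same name.

class FakeApiClient:
    def select_header_content_type(self, content_types):
        # Default behaviour: pick the first offered content type.
        return content_types[0] if content_types else "application/json"

def force_content_type(api_client, content_type):
    # Replace the selection method on this instance so every request
    # made through it uses `content_type`, regardless of the body.
    api_client.select_header_content_type = lambda *args, **kwargs: content_type
    return api_client

client = force_content_type(FakeApiClient(), "application/merge-patch+json")
assert client.select_header_content_type(["application/json"]) == \
    "application/merge-patch+json"
```

The PR under discussion would make this hack unnecessary by letting the caller set the content type explicitly.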
So I read the last comment from the author as being about getting feedback from upstream about the direction/approach? My bad 😄
@krichter722 It's marked WIP because it shouldn't be merged in this form. I'm waiting for this PR to be reviewed before preparing the changes in OpenAPI Generator.
cc: @roycaihw
@tomplus: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@tomplus: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.
I know it needs a rebase, but it's only here to show and discuss the idea. In the end, the generator will be extended.
@yliaog @roycaihw please take a look.
/hold
Thanks for putting this together! It's a huge improvement to this client and can also benefit other generated clients. LGTM
Hi team. Can you prioritize this PR? It's been more than 2 years since it was opened and I'm keen to use this improvement.
@tadrian88 this PR is for discussing the idea; it cannot be merged as is, I think.
My update: openapi-generator already has the required changes (PRs #10686 and #10978) in both the legacy and the new Python generator. They are in the master branch and will be released in v5.3.1. We can continue work on this when we switch to the latest version of the generator; that is tracked in https://github.com/kubernetes-client/python/issues/1589.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten