[release-1.x] Patching a namespaced Secret triggers an error due to unsupported media type
Describe the bug
When trying to use the patchNamespacedSecret method on the release-1.x branch, the following error is raised:
Error: None of the given media types are supported: application/json-patch+json, application/merge-patch+json, application/strategic-merge-patch+json, application/apply-patch+yaml
at ObjectSerializer.getPreferredMediaType (ObjectSerializer.js:1760:1)
at CoreV1ApiRequestFactory.patchNamespacedSecret (CoreV1Api.js:8653:1)
at ObservableCoreV1Api.patchNamespacedSecret (ObservableAPI.js:9211:1)
at ObjectCoreV1Api.patchNamespacedSecret (ObjectParamAPI.js:2661:1)
//...
**Client Version**: release-1.x
**Server Version**: 1.24.3
To Reproduce

Sample code triggering the error:
```typescript
const coreV1 = new CoreV1Api(config);
coreV1.patchNamespacedSecret({
  namespace: "default",
  name: "my-secret",
  body: { "tls.crt": "newValue" },
});
```
Also somewhat related to this issue: I think we are missing a way to tell patchNamespacedSecret which patch strategy to use. In the example above I would expect it to use application/merge-patch+json, but there is currently no way to provide the patch strategy.
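For concreteness, here is what a JSON merge patch body for this Secret could look like — a sketch, independent of any particular client version. One detail worth noting: Secret `data` values are base64-encoded (the write-only `stringData` field is the one that accepts plain strings).

```typescript
// A JSON merge patch body for a Secret (sketch). Values under `data`
// must be base64-encoded. Sent with Content-Type
// application/merge-patch+json, this replaces only the "tls.crt" key
// and leaves other keys in `data` untouched.
const newCert = "newValue"; // hypothetical certificate contents
const mergePatchBody = {
  data: {
    "tls.crt": Buffer.from(newCert, "utf8").toString("base64"),
  },
};
console.log(JSON.stringify(mergePatchBody));
// → {"data":{"tls.crt":"bmV3VmFsdWU="}}
```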
cc @davidgamero
@JBWatenbergScality thanks for letting us know. Did you have this working on the 0.x version prior?
I think you are correct that it would require a custom Content-Type header, let me check if we have a supported pattern for this.
From a quick look through, it seems like some of the media types are missing in this list, but you may be able to supply one through the options parameter or by using the CustomObjectsApi.
I had a similar problem with patching cronjobs and I was able to add headers via the options argument and get it working. For example:
```typescript
const result = await batchV1Api.patchNamespacedCronJob(
  name, ns, body, undefined, undefined, undefined, undefined, undefined,
  {
    headers: {
      'Content-Type': 'application/merge-patch+json',
    },
  });
```
I am currently working on fixing these outdated examples and am hitting the same error, notably on readNamespacedDeployment and createNamespace. I might go ahead and push up what I have so we have some easy-to-reproduce examples.
I'm taking a look at patchNamespacedPod, and ran into an issue where supplying a new middleware wipes the configuration instead of merging it, resulting in an empty base server URL. I've been trying out something like this:
```typescript
const headerPatchMiddleware = new PromiseMiddlewareWrapper({
  pre: async (requestContext: RequestContext) => {
    requestContext.setHeaderParam("Content-type", PatchUtils.PATCH_FORMAT_JSON_PATCH);
    return requestContext;
  },
  post: async (responseContext: ResponseContext) => responseContext,
});
```
and then passing that in the options.middleware array
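To make the mechanics concrete, here is a self-contained sketch of that middleware shape. The `RequestContext` stand-in below is a hypothetical minimal type defined locally so the snippet runs on its own; the real type comes from the generated client, and the exact options shape should be checked against the release you are on:

```typescript
// Minimal local stand-in for the client's RequestContext, just enough
// to show what the middleware's `pre` hook does.
type RequestContext = {
  headers: Record<string, string>;
  setHeaderParam(key: string, value: string): void;
};

function makeContext(): RequestContext {
  const headers: Record<string, string> = {};
  return {
    headers,
    setHeaderParam(key, value) {
      headers[key] = value;
    },
  };
}

// The middleware idea from above: force the patch media type on the
// outgoing request, and pass responses through untouched.
const headerPatchMiddleware = {
  pre: async (ctx: RequestContext): Promise<RequestContext> => {
    ctx.setHeaderParam("Content-Type", "application/merge-patch+json");
    return ctx;
  },
  post: async <T>(res: T): Promise<T> => res,
};

// In the real client this would go into the call's options, e.g.
// { middleware: [headerPatchMiddleware] }. Here we just run `pre`
// against the stub to show its effect:
headerPatchMiddleware.pre(makeContext()).then((ctx) => {
  console.log(ctx.headers["Content-Type"]); // application/merge-patch+json
});
```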
@davidgamero is there a reason we are still generating the client with `TYPESCRIPT="${GEN_ROOT}/openapi/typescript.sh"` instead of `TYPESCRIPT="${GEN_ROOT}/openapi/typescript-fetch.sh"`? Would this have anything to do with this issue? We have an item in FETCH_MIGRATION to "Switch generate-client script to use typescript-fetch", but I don't see where we have done that.
@clintonmedbery good point, that item should be updated for clarity. The generator we are using is the new typescript generator that is still in development, with node-fetch selected as the framework config option.
It's an attempt to merge the various TypeScript client generators, which could hopefully one day give us an easy upgrade path to native fetch and potentially support other frameworks.
We should probably note that in the main thread for clarity, since right now it's only in the migration doc.
I see two related issues here: we don't have a clear pattern or example for providing a custom patch strategy (this should be doable with the current RequestContext middleware), and some generated code is missing media types. Both appear to come from a disconnect between how the typescript generator creates the allowlist of media types and the swagger spec.
I was able to get patch operations working in 1.0.0-rc3 with a minor edit to the generated ObjectSerializer.js. The code that needs to be changed is in https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/main/resources/typescript/model/ObjectSerializer.mustache, so I've opened a PR on that project to support the additional media types: https://github.com/OpenAPITools/openapi-generator/pull/16386
Edit: I did try to fix this with middleware, but debugging the operation proved that it was failing before it even got to my middleware. The issue is that ObjectSerializer doesn't support any of the candidate media types for patching.
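For anyone following along, the failure mode can be illustrated with a small stand-alone sketch. This is not the generated code itself, just the shape of the check: the serializer walks the candidate media types, returns the first one found in its supported set, and throws when none match — which happened before any middleware ran:

```typescript
// The four media types Kubernetes accepts for patch requests.
const patchCandidates = [
  "application/json-patch+json",
  "application/merge-patch+json",
  "application/strategic-merge-patch+json",
  "application/apply-patch+yaml",
];

// Sketch of an allowlist check: return the first supported candidate,
// throw when none match (the rc3 error seen in this issue).
function pickMediaType(candidates: string[], supported: Set<string>): string {
  for (const m of candidates) {
    if (supported.has(m)) return m;
  }
  throw new Error(
    "None of the given media types are supported: " + candidates.join(", "),
  );
}

// rc3 behavior: the supported set lacked the patch types, so this threw.
// After the generator fix, the patch types are present and a type is picked:
const fixedSupported = new Set(["application/json", ...patchCandidates]);
console.log(pickMediaType(patchCandidates, fixedSupported));
// → application/json-patch+json
```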
@dleehr would your middleware solution work now if we would use the updated code from the openapi-generator which was merged recently?
@mstruebing The middleware change is actually unnecessary. The client is already determining what the Content-Type should be for the request (an enhancement over 0.x), but the generated ObjectSerializer was blocking that. Updating the generated ObjectSerializer code was the only fix needed for 1.0.0-rc3. So I think regenerating with openapi-generator will fix it here.
I hand-patched node_modules/@kubernetes/client-node/dist/gen/models/ObjectSerializer.js after installing 1.0.0-rc3 and now I'm able to successfully patchNamespacedSecret, patchNamespacedServiceAccount, etc.
I believe in the past we supplied a custom ObjectSerializer, but I agree that it's cleaner to just contribute to the generator instead.
Looking forward to the next rc since this should unblock a ton of routes!
@dleehr Thanks so much for your great work. The fix for `Error: None of the given media types are supported: application/json-patch+json, application/merge-patch+json, application/strategic-merge-patch+json, application/apply-patch+yaml` on patchNamespacedSecret is now in master.
Is there an ETA for the availability of 1.0.0-rc4 with this fix?
Thanks a lot.
By the way, is there any workaround for this error on patchNamespacedSecret?
@brendandburns it'd be great if we can get a new rc when you have time please?
@guomeng306 unfortunately there isn't a workaround in the current rc, AFAIK, without editing the generated ObjectSerializer to include the needed media types like @dleehr did. I'm happy to point you to those changes, but we should have a new rc for you soon with the fix!
@davidgamero understood, thanks so much. By the way, is there an ETA for the availability of 1.0.0-rc4 with this fix? E.g., possibly in the next two weeks? :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Is there a workaround for this yet? I ran into this issue with v0.20.0.
@jeromy-cannon this issue is from the 1.x releases, which included a fix in 1.0.0-rc4.
If you are experiencing this on a v0.x.x release, we may want to open a separate issue for tracking it, as the 0.x and 1.x working branches are separate.
IIRC, one 0.x approach to this issue was the KubernetesObjectApi, which doesn't yet include all the same functionality on the 1.x branch, but should allow handling patch media types on 0.x releases.
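As a sketch of that 0.x route: KubernetesObjectApi identifies the target object from apiVersion/kind/metadata, and its methods take an options argument where a Content-Type can be set. The client calls are commented out here (they need a cluster), and the exact `patch()` parameter list is an assumption from memory that should be checked against your 0.x version; only the patch body construction actually runs:

```typescript
// 0.x sketch: patch via KubernetesObjectApi, forcing the media type
// through the options argument. Cluster-dependent lines are commented
// out so this snippet stays self-contained.
//
// import * as k8s from "@kubernetes/client-node";
// const kc = new k8s.KubeConfig();
// kc.loadFromDefault();
// const client = k8s.KubernetesObjectApi.makeApiClient(kc);

// The object to patch, identified by apiVersion/kind/metadata.
// Secret `data` values are base64-encoded.
const secretPatch = {
  apiVersion: "v1",
  kind: "Secret",
  metadata: { name: "my-secret", namespace: "default" },
  data: { "tls.crt": Buffer.from("newValue").toString("base64") },
};

// Hypothetical call shape (positional args as in other 0.x methods):
// await client.patch(
//   secretPatch,
//   undefined, undefined, undefined, undefined,
//   { headers: { "Content-Type": "application/merge-patch+json" } },
// );
console.log(secretPatch.data["tls.crt"]); // bmV3VmFsdWU=
```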
Also, it would help for repro if you could include the exact media type and the method call you are expecting to use. Is it application/merge-patch+json, as mentioned at the start of the thread?
@davidgamero, thank you for the response. Great news on the fix in 1.0.0-rc4. As a workaround I just did a get/delete/(update the payload from the get)/insert. Thank you for letting me know about KubernetesObjectApi; I might need to use that strategy in the future.
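That read-modify-write workaround can be sketched as follows. One design note: a plain replace (`replaceNamespacedSecret`) avoids the patch media types entirely, so a delete step shouldn't be necessary. The client calls are commented out (they assume the 1.x object-parameter shapes and a live cluster); the pure update step runs on its own:

```typescript
// Read-modify-replace workaround (sketch): fetch the Secret, update one
// key locally, write the whole object back. No patch media types involved.
type SecretLike = { data?: Record<string, string> };

// Pure step: return a copy with one key set (base64-encoding the value),
// keeping all other existing keys.
function withUpdatedKey(secret: SecretLike, key: string, value: string): SecretLike {
  return {
    ...secret,
    data: { ...(secret.data ?? {}), [key]: Buffer.from(value).toString("base64") },
  };
}

// Hypothetical surrounding calls, 1.x object-parameter style:
// const existing = await coreV1.readNamespacedSecret({ name: "my-secret", namespace: "default" });
// const updated = withUpdatedKey(existing, "tls.crt", "newValue");
// await coreV1.replaceNamespacedSecret({ name: "my-secret", namespace: "default", body: updated });

const demo = withUpdatedKey({ data: { other: "a2VlcA==" } }, "tls.crt", "newValue");
console.log(demo.data); // existing "other" key kept, "tls.crt" updated
```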
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I've run into this same issue and resolved it for now by using 1.0.0-rc4 - but I'm curious if anyone knows what the status of this release candidate is? How close is it to being cut as the official v1?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the triage comment above:
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.