1.0.0-rc4: Example for overriding the content-type header of patching operations does not work
**Describe the bug** The configuration passed into the patching APIs does not honor middleware overrides. The generated code never looks at the passed-in configuration for middlewares; it only looks at the middlewares on its member configuration object, e.g.:
patchNamespacedStatefulSetWithHttpInfo(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options) {
    const requestContextPromise = this.requestFactory.patchNamespacedStatefulSet(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options);
    // build promise chain
    let middlewarePreObservable = (0, rxjsStub_1.from)(requestContextPromise);
    for (let middleware of this.configuration.middleware) {
        middlewarePreObservable = middlewarePreObservable.pipe((0, rxjsStub_2.mergeMap)((ctx) => middleware.pre(ctx)));
    }
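For contrast, a hedged sketch of roughly what the wrapper would need to do to honor a per-call configuration; this is not the actual generated code, and the fallback shown here is an assumption about how a fix could look:

// Sketch only: prefer the middleware list from the per-call _options configuration,
// falling back to the client's member configuration when none is passed.
const middleware = _options?.middleware ?? this.configuration.middleware;
let middlewarePreObservable = (0, rxjsStub_1.from)(requestContextPromise);
for (const m of middleware) {
    middlewarePreObservable = middlewarePreObservable.pipe((0, rxjsStub_2.mergeMap)((ctx) => m.pre(ctx)));
}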
**Client Version**
1.0.0-rc4
**Server Version**
1.25.1
**To Reproduce** Steps to reproduce the behavior:
- make a call to patchNamespacedStatefulSet with a configuration that overrides the middleware, e.g. as sketched below.
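A minimal sketch of such a call, loosely reconstructing the linked example; the promiseMiddleware option, the merge-patch header value, and the resource names here are illustrative assumptions, not a copy of that file:

import { AppsV1Api, KubeConfig, createConfiguration } from '@kubernetes/client-node';

const kc = new KubeConfig();
kc.loadFromDefault();
const client = kc.makeApiClient(AppsV1Api);

// Per-call configuration carrying a middleware that rewrites the Content-Type header.
// (promiseMiddleware is assumed to be the generator's promise-style middleware option.)
const patchConfig = createConfiguration({
    promiseMiddleware: [{
        pre: async (req) => {
            req.setHeaderParam('Content-Type', 'application/merge-patch+json');
            return req;
        },
        post: async (rsp) => rsp,
    }],
});

// rc4 signature: name, namespace, body, five optional query params, then _options.
// The middleware above is silently ignored, so the Content-Type override never
// reaches the outgoing request.
await client.patchNamespacedStatefulSet(
    'my-statefulset', 'default', { spec: { replicas: 2 } },
    undefined, undefined, undefined, undefined, undefined,
    patchConfig,
);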
**Expected behavior** The middleware should be called to override the request.
**Example Code** See https://github.com/kubernetes-client/javascript/blob/62e5ab1701cb5659656f1941ef11eb748e626c25/examples/patch-example.js
**Environment** (please complete the following information):
- OS: [Ubuntu 22.04]
- NodeJS Version [18]
- Cloud runtime [NA]
**Additional context** Related to https://github.com/kubernetes-client/javascript/issues/1398
Workaround
return createConfiguration({
    baseServer: baseServerConfig,
    middleware: [mw],
    authMethods: {
        default: {
            // Run the middleware's pre() hook as part of the auth step, since the
            // middleware override itself is ignored by the generated code.
            applySecurityAuthentication: async (req) => {
                await mw.pre(req).toPromise();
                await kc.applySecurityAuthentication(req);
            }
        },
    },
});
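For context, a hedged sketch of the surrounding pieces this snippet refers to (kc, baseServerConfig, mw); the middleware shape and the header value are assumptions, not part of the original comment:

import { KubeConfig, RequestContext, ServerConfiguration } from '@kubernetes/client-node';

// kc and baseServerConfig as used by the snippet above.
const kc = new KubeConfig();
kc.loadFromDefault();
const baseServerConfig = new ServerConfiguration(kc.getCurrentCluster()!.server, {});

// Hypothetical middleware: the snippet only ever calls mw.pre(req).toPromise(),
// so any object whose pre() returns something with a toPromise() method is enough here.
const mw = {
    pre: (req: RequestContext) => ({
        toPromise: async () => {
            req.setHeaderParam('Content-Type', 'application/merge-patch+json');
            return req;
        },
    }),
};

The resulting configuration can then be passed as the trailing _options argument of the patch call (or used to construct a dedicated API client); the workaround presumably works because the generated request factory does consult the per-call config's default auth method even though the per-call middleware list is ignored.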
I think that you will need to fix this in the upstream generator: https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator/src/main/resources/typescript-fetch
Once the fix is merged there, we can regenerate the client.
Thanks @brendandburns, related to https://github.com/OpenAPITools/openapi-generator/issues/14549
The above issue was created a year ago, and given that the repo's open issue count keeps increasing and is now at 4k, I don't have faith this will get addressed soon. I'm okay with my workaround for now; I'll keep this open so people can find the workaround here.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Can confirm this is still happening. Had to use the workaround mentioned above on 1.0.0-rc6:
https://github.com/nullplatform/k8s-lease-lock/commit/6b189fd6fa8f001f96d61a52f835cce691edff3a#diff-d8eee3b9c50488b328e4b9805b642354ccbaefadf361cbd9bd763925ed61ebeaR50
Can confirm that it was indeed a pain to work with. Is there any interest in documenting the workaround? For how small the PR was, there was a not insignificant amount of frustration. Is there a preferred way this could be handled, like passing in a patchType?
Anyhow, it was nice to get rid of the 4 or 5 undefineds on the patch call.
/reopen /lifecycle frozen
@brendandburns: Reopened this issue.
In response to this:
/reopen /lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Note: not only middlewares but also httpApi is never used, so the configuration param is generally broken.
In fact, I really feel the configuration param is not easy to use.
This is my current workaround. It changes the API signature: it drops the configuration parameter and adds { signal?: AbortSignal } to support aborting requests, which solves #1613 (by using AbortSignal.timeout). It should also be possible to add headers the same way.
import fetch from "node-fetch"
import { type ApiType, type KubeConfig, type Configuration, createConfiguration, ServerConfiguration, ResponseContext, wrapHttpLibrary } from "@kubernetes/client-node"

export type AbortableV1Api<T> = {
    [K in keyof T]: T[K] extends (param: infer Param) => infer Return
        ? (param: Param, options?: { signal?: AbortSignal }) => Return
        : never
}

export function makeAbortableApiClient<T extends ApiType>(kubeConfig: KubeConfig, apiClientType: new (config: Configuration) => T) {
    const cluster = kubeConfig.getCurrentCluster();
    if (!cluster) {
        throw new Error('No active cluster!');
    }
    const baseServer = new ServerConfiguration(cluster.server, {})
    const httpApi = wrapHttpLibrary({
        async send(request) {
            console.log("send", request)
            const signal = (request as any).signal
            const response = await fetch(request.getUrl(), {
                method: request.getHttpMethod(),
                headers: request.getHeaders(),
                body: request.getBody(),
                agent: request.getAgent(),
                signal,
            })
            return new ResponseContext(
                response.status,
                Object.fromEntries(response.headers.entries()),
                {
                    text() {
                        return response.text()
                    },
                    async binary() {
                        return Buffer.from(await response.arrayBuffer())
                    },
                },
            )
        },
    })
    const config = createConfiguration({ httpApi, authMethods: { default: kubeConfig } })
    const api = new apiClientType(config)
    const methodCache = new WeakMap()
    return new Proxy(api, {
        get(target, prop, receiver) {
            const orig = Reflect.get(target, prop, receiver)
            if (typeof orig != "function") return orig
            if (methodCache.has(orig)) return methodCache.get(orig)
            console.log("create method", orig.name)
            const method = async function (this: any, ...args: any[]) {
                args[1] = withSignal(args[1]?.signal)
                // console.log(orig.name, args.length, args[1])
                return Reflect.apply(orig, this, args)
            }
            Object.defineProperty(method, "name", { value: orig.name })
            methodCache.set(orig, method)
            return method
        }
    }) as AbortableV1Api<T>

    function withSignal(signal: AbortSignal | null | undefined) {
        return createConfiguration({
            baseServer: {
                makeRequestContext(endpoint, httpMethod) {
                    const req = baseServer.makeRequestContext(endpoint, httpMethod)
                    if (signal != null) {
                        (req as any).signal = signal
                    }
                    return req
                },
            },
        })
    }
}
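A possible usage sketch for the helper above, assuming the object-parameter method signatures of the newer release candidates; the module path is hypothetical:

import { CoreV1Api, KubeConfig } from "@kubernetes/client-node"
import { makeAbortableApiClient } from "./abortable-api" // hypothetical location of the helper above

const kc = new KubeConfig()
kc.loadFromDefault()
const api = makeAbortableApiClient(kc, CoreV1Api)

// Abort the request if it has not completed within 5 seconds.
const pods = await api.listNamespacedPod(
    { namespace: "default" },
    { signal: AbortSignal.timeout(5000) },
)
console.log(pods.items.length)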
Closing, as this should be fixed per https://github.com/kubernetes-client/javascript/pull/2332#issue-2950215219.