1.0.0-rc4: Example for overriding the content-type header of patching operations does not work

Open tugtugtug opened this issue 1 year ago • 11 comments

Describe the bug The configuration passed into the patching APIs does not honor the middleware overrides. The generated code never looks at the passed-in configuration for middleware; it only consults the middleware list on its own member configuration object, e.g.:

    patchNamespacedStatefulSetWithHttpInfo(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options) {
        const requestContextPromise = this.requestFactory.patchNamespacedStatefulSet(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options);
        // build promise chain
        let middlewarePreObservable = (0, rxjsStub_1.from)(requestContextPromise);
        // BUG: only this.configuration.middleware is consulted; _options.middleware is never read
        for (let middleware of this.configuration.middleware) {
            middlewarePreObservable = middlewarePreObservable.pipe((0, rxjsStub_2.mergeMap)((ctx) => middleware.pre(ctx)));
        }
        // ...
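
For what it's worth, a minimal sketch of the kind of change one would expect here (hypothetical; the real fix belongs in the upstream generator templates, as noted below) is to prefer the per-call _options over the member configuration:

    patchNamespacedStatefulSetWithHttpInfo(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options) {
        const requestContextPromise = this.requestFactory.patchNamespacedStatefulSet(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options);
        // hypothetical fix: fall back to the member configuration only when no per-call override is given
        const middleware = (_options && _options.middleware) || this.configuration.middleware;
        let middlewarePreObservable = (0, rxjsStub_1.from)(requestContextPromise);
        for (let mw of middleware) {
            middlewarePreObservable = middlewarePreObservable.pipe((0, rxjsStub_2.mergeMap)((ctx) => mw.pre(ctx)));
        }
        // ...
    }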

**Client Version**: 1.0.0-rc4

**Server Version**: 1.25.1

To Reproduce Steps to reproduce the behavior:

  • make a call to patchNamespacedStatefulSet, passing a configuration that overrides the middleware (a sketch follows this list).
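
A minimal repro sketch, assuming mw is a middleware whose pre hook rewrites the request's Content-Type header and patch is the body to apply (names are illustrative; the positional parameter list matches the rc4 signature quoted above):

    import { KubeConfig, AppsV1Api, createConfiguration } from '@kubernetes/client-node';

    const kc = new KubeConfig();
    kc.loadFromDefault();
    const api = kc.makeApiClient(AppsV1Api);

    // per-call configuration carrying the middleware override
    const override = createConfiguration({ middleware: [mw] });
    await api.patchNamespacedStatefulSet('my-set', 'default', patch,
        undefined, undefined, undefined, undefined, undefined, override);
    // BUG: mw.pre is never invoked; only the middleware on the client's own configuration runs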

Expected behavior The middleware passed in the per-call configuration should be invoked so it can override the request (here, its Content-Type header).

**Example Code**: see https://github.com/kubernetes-client/javascript/blob/62e5ab1701cb5659656f1941ef11eb748e626c25/examples/patch-example.js

Environment (please complete the following information):

  • OS: [Ubuntu 22.04]
  • NodeJS Version [18]
  • Cloud runtime [NA]

Additional context Related to https://github.com/kubernetes-client/javascript/issues/1398

Workaround

		return createConfiguration({
			baseServer: baseServerConfig,
			middleware: [mw], // ignored by the generated code (this bug)
			authMethods: {
				default: {
					applySecurityAuthentication: async (req) => {
						// the generated code *does* consult authMethods, so run the
						// middleware's pre hook here before applying the real auth
						await mw.pre(req).toPromise();
						await kc.applySecurityAuthentication(req);
					}
				},
			},
		});
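
For context, a sketch of the kc and mw this snippet assumes: kc is the loaded KubeConfig and mw is a middleware that forces the patch content type. The .toPromise() call above implies the pre hook returns the client's Observable-like wrapper, so a minimal hand-rolled stand-in is used here; setHeaderParam is the header setter on the generated RequestContext (all names otherwise hypothetical):

    import { KubeConfig } from '@kubernetes/client-node';

    const kc = new KubeConfig();
    kc.loadFromDefault();

    // hypothetical middleware: force the strategic-merge-patch content type on every request
    const mw = {
        pre: (ctx) => {
            ctx.setHeaderParam('Content-Type', 'application/strategic-merge-patch+json');
            return { toPromise: async () => ctx }; // minimal stand-in for the client's Observable wrapper
        },
        post: (rsp) => ({ toPromise: async () => rsp }),
    };

The key point of the workaround is that the generated request factory does consult the per-call authMethods, so piggybacking the middleware's pre hook on applySecurityAuthentication guarantees the header override runs even though the middleware list itself is ignored.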

tugtugtug avatar Jan 04 '24 21:01 tugtugtug

I think that you will need to fix this in the upstream generator: https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator/src/main/resources/typescript-fetch

Once the fix is merged there, we can regenerate the client.

brendandburns avatar Jan 06 '24 18:01 brendandburns

thanks @brendandburns , related to https://github.com/OpenAPITools/openapi-generator/issues/14549

tugtugtug avatar Jan 09 '24 15:01 tugtugtug

The upstream issue was created a year ago, and given that the repo's open-issue count keeps increasing and is now at 4k, I don't have faith this will be addressed soon. I'm okay with my workaround for now; I'll keep this open so people can find the workaround here.

tugtugtug avatar Jan 18 '24 15:01 tugtugtug

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 17 '24 16:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 17 '24 16:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jun 16 '24 16:06 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jun 16 '24 16:06 k8s-ci-robot

Can confirm this is still happening on 1.0.0-rc6; I had to use the workaround mentioned above.

https://github.com/nullplatform/k8s-lease-lock/commit/6b189fd6fa8f001f96d61a52f835cce691edff3a#diff-d8eee3b9c50488b328e4b9805b642354ccbaefadf361cbd9bd763925ed61ebeaR50

Can confirm that it was indeed a pain to work with. Is there any interest in documenting the workaround? For how small the PR ended up being, there was a not-insignificant amount of frustration. Is there a preferred way this could be handled, like passing in a patchType?

Anyhow, it was nice to get rid of the 4 or 5 undefineds on the patch call.

impatient avatar Aug 13 '24 17:08 impatient

/reopen
/lifecycle frozen

brendandburns avatar Aug 14 '24 14:08 brendandburns

@brendandburns: Reopened this issue.

In response to this:

/reopen
/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 14 '24 14:08 k8s-ci-robot

Note: not only middleware but also httpApi is never used, so the configuration param is broken in general.

In fact, I feel the configuration param is not easy to use anyway.

This is my current workaround. It changes the API signature: it drops configuration and adds { signal?: AbortSignal } so requests can be aborted, which solves #1613 (via AbortSignal.timeout). It should also be possible to add headers the same way.

import fetch from "node-fetch"

import { type ApiType, type KubeConfig, type Configuration, createConfiguration, ServerConfiguration, ResponseContext, wrapHttpLibrary } from "@kubernetes/client-node"

export type AbortableV1Api<T> = {
	[K in keyof T]: T[K] extends (param: infer Param) => infer Return
		? (param: Param, options?: { signal?: AbortSignal }) => Return : never
}

export function makeAbortableApiClient<T extends ApiType>(kubeConfig: KubeConfig, apiClientType: new (config: Configuration) => T) {

	const cluster = kubeConfig.getCurrentCluster();
	if (!cluster) {
		 throw new Error('No active cluster!');
	}
	const baseServer = new ServerConfiguration(cluster.server, {})

	// custom HTTP library: forwards the smuggled abort signal on the request context to node-fetch
	const httpApi = wrapHttpLibrary({
		async send(request) {
			console.log("send", request)
			const signal = (request as any).signal
			const response = await fetch(request.getUrl(), {
				method: request.getHttpMethod(),
				headers: request.getHeaders(),
				body: request.getBody(),
				agent: request.getAgent(),
				signal,
			})
			return new ResponseContext(
				response.status,
				Object.fromEntries(response.headers.entries()),
				{
					text() {
						return response.text()
					},
					async binary() {
						return Buffer.from(await response.arrayBuffer())
					},
				},
			)
		},
	})

	const config = createConfiguration({ httpApi, authMethods: { default: kubeConfig } })
	const api = new apiClientType(config)

	// wrap the generated client in a Proxy so every method accepts an optional { signal } second argument
	const methodCache = new WeakMap()
	return new Proxy(api, {
		get(target, prop, receiver) {
			const orig = Reflect.get(target, prop, receiver)
			if (typeof orig != "function") return orig
			if (methodCache.has(orig)) return methodCache.get(orig)
			console.log("create method", orig.name)
			const method = async function (this: any, ...args: any[]) {
				args[1] = withSignal(args[1]?.signal)
				// console.log(orig.name, args.length, args[1])
				return Reflect.apply(orig, this, args)
			}
			Object.defineProperty(method, "name", { value: orig.name })
			methodCache.set(orig, method)
			return method
		}
	}) as AbortableV1Api<T>

	// build a per-call configuration whose request contexts carry the abort signal
	function withSignal(signal: AbortSignal | null | undefined) {
		return createConfiguration({
			baseServer: {
				makeRequestContext(endpoint, httpMethod) {
					const req = baseServer.makeRequestContext(endpoint, httpMethod)
					if (signal != null) {
						(req as any).signal = signal
					}
					return req
				},
			},
		})
	}
}
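
For illustration, a usage sketch of the wrapper above, assuming the 1.x object-parameter method style, CoreV1Api imported from @kubernetes/client-node, and Node 18+'s AbortSignal.timeout (method and field names are illustrative):

    const kc = new KubeConfig();
    kc.loadFromDefault();
    const core = makeAbortableApiClient(kc, CoreV1Api);

    // aborts the request (and rejects the promise) if it takes longer than 5 seconds
    const pods = await core.listNamespacedPod(
        { namespace: 'default' },
        { signal: AbortSignal.timeout(5000) },
    );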

hax avatar Sep 13 '24 03:09 hax

Closing, as this should be fixed per https://github.com/kubernetes-client/javascript/pull/2332#issue-2950215219.

cjihrig avatar Apr 16 '25 02:04 cjihrig