JSON Patch is not implemented in patch_namespaced_custom_object
What happened (please include outputs or screenshots):
JSON Patch is not implemented in patch_namespaced_custom_object. When you pass a JSON Patch list as the body to patch_namespaced_custom_object, you get the following error:
```
<class 'kubernetes.client.exceptions.ApiException'>: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'eb09ccb7-4389-4426-9335-5710fd4e280d', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': 'a17bb138-39f2-47bd-9f64-afa2a8d965d9', 'X-Kubernetes-Pf-Prioritylevel-Uid': '6a07914b-de41-434d-9f5b-9fe24c420bee', 'Date': 'Mon, 10 Apr 2023 09:28:15 GMT', 'Content-Length': '842'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":" \"\" is invalid: patch: Invalid value: \"[{\\\"op\\\":\\\"replace\\\",\\\"path\\\":\\\"/spec/values\\\",\\\"value\\\":8080}]\": couldn't get version/kind; json parse error: json: cannot unmarshal array into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }","reason":"Invalid","details":{"causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"[{\\\"op\\\":\\\"replace\\\",\\\"path\\\":\\\"/spec/values\\\",\\\"value\\\":8080}]\": couldn't get version/kind; json parse error: json: cannot unmarshal array into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }","field":"patch"}]},"code":422}
```
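For reference, a minimal call that triggers this error. This is a sketch: the group/version/plural/name coordinates and the patched field are placeholders standing in for an arbitrary CRD, not taken from the report above.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# An RFC 6902 JSON Patch is a *list* of operations, not a single object.
json_patch = [{"op": "replace", "path": "/spec/values", "value": 8080}]

# Raises ApiException (422): the API server rejects the list because it
# tries to parse it as a merge-patch document (a JSON object).
api.patch_namespaced_custom_object(
    group="example.com",       # placeholder CRD coordinates
    version="v1",
    namespace="default",
    plural="myresources",
    name="my-resource",
    body=json_patch,
)
```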
This fails because the Kubernetes Python client attempts to send the body as a strategic merge patch instead of a JSON Patch. Other functions, such as patch_namespaced_service, handle JSON Patch bodies as expected.
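If the content type is indeed the problem, one possible workaround (a sketch, not an official API for this) is to force the Content-Type header on the underlying ApiClient so the server parses the body as a JSON Patch:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# set_default_header is part of ApiClient. Forcing the Content-Type makes
# the API server treat the body as an RFC 6902 JSON Patch. Whether the
# default header takes precedence over the one the client selects may
# depend on the client version -- treat this as an assumption, not fact.
api.api_client.set_default_header("Content-Type", "application/json-patch+json")

api.patch_namespaced_custom_object(
    group="example.com",       # same placeholder coordinates as above
    version="v1",
    namespace="default",
    plural="myresources",
    name="my-resource",
    body=[{"op": "replace", "path": "/spec/values", "value": 8080}],
)
```

Note that a default header applies to every subsequent request made through that ApiClient instance, so using a dedicated client instance for JSON Patch calls may be safer.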
Environment:
- Kubernetes version (kubectl version): Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.10", GitCommit:"5c1d2d4295f9b4eb12bfbf6429fdf989f2ca8a02", GitTreeState:"clean", BuildDate:"2023-01-18T19:15:31Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"} Kustomize Version: v4.5.4 Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.10+k3s1", GitCommit:"546a94e9ae1c3be6f9c0dcde32a6e6672b035bc8", GitTreeState:"clean", BuildDate:"2023-01-26T00:35:57Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g., MacOS 10.13.6): Windows 11
- Python version (python --version): Python 3.11.1
- Python client version (pip list | grep kubernetes): kubernetes 24.2.0
/assign @iTaybb
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Still happens
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten