JSON patches with a "test" operation fail when the patch body contains non-ASCII characters or any of <, >, &
What happened (please include outputs or screenshots):
For example, the `patch_namespaced_pod` call from the repro steps below fails with the following stack trace:
```
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    core_v1.patch_namespaced_pod("foo", "default", broken_patch) # breaks with unprocessable entity 422
  File "/opt/python-modules/kubernetes/client/api/core_v1_api.py", line 19142, in patch_namespaced_pod
    return self.patch_namespaced_pod_with_http_info(name, namespace, body, **kwargs) # noqa: E501
  File "/opt/python-modules/kubernetes/client/api/core_v1_api.py", line 19267, in patch_namespaced_pod_with_http_info
    collection_formats=collection_formats)
  File "/opt/python-modules/kubernetes/client/api_client.py", line 353, in call_api
    _preload_content, _request_timeout, _host)
  File "/opt/python-modules/kubernetes/client/api_client.py", line 184, in __call_api
    _request_timeout=_request_timeout)
  File "/opt/python-modules/kubernetes/client/api_client.py", line 413, in request
    body=body)
  File "/opt/python-modules/kubernetes/client/rest.py", line 300, in PATCH
    body=body)
  File "/opt/python-modules/kubernetes/client/rest.py", line 233, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Length': '187', 'Content-Type': 'application/json', 'Date': 'Thu, 16 Nov 2023 12:26:53 GMT', 'X-Kubernetes-Pf-Flowschema-Uid': '21a41de5-66b7-42b7-9fa2-da0aec928b00', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'a234a632-f115-414a-b211-cabc4b8b0246'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server rejected our request due to an error in our request","reason":"Invalid","details":{},"code":422}
```
What you expected to happen: The `test` operation should succeed when the content is the same, regardless of how the serializing library chose to escape it.
How to reproduce it (as minimally and precisely as possible): The example below attempts a JSON patch `test` on the annotations of a pod. First create the pod (the image doesn't matter):
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: foo
  annotations:
    foo.example.com/bar: "<&>β"
    foo.example.com/baz: "bad"
spec:
  containers:
  - name: foo
    image: 127.0.0.1/no-such-image:no-such-tag
EOF
```
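For reference, reading the annotation back shows how Go serialized the stored value; the output comment below is what I would expect from Go's default HTML-escaping, not captured output:

```
kubectl get pod foo -o json | grep foo.example.com/bar
# expected: "foo.example.com/bar": "\u003c\u0026\u003eβ",
```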
Then run this script:
```
from kubernetes import client, config
core_v1 = client.CoreV1Api()
# reconfigure as needed
core_v1.api_client.configuration.host = "http://localhost:8483"
working_patch = [{"op": "test", "path": "/metadata/annotations/foo.example.com~1baz", "value": "bad"}]
core_v1.patch_namespaced_pod("foo", "default", working_patch)  # completes successfully
broken_patch = [{"op": "test", "path": "/metadata/annotations/foo.example.com~1bar", "value": "<&>β"}]
core_v1.patch_namespaced_pod("foo", "default", broken_patch)  # breaks with unprocessable entity 422
```
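The `~1` in the paths is RFC 6901 JSON Pointer escaping for the `/` inside the annotation keys. A minimal helper for building such paths (my own sketch; `json_pointer_escape` is not part of the client library):

```
def json_pointer_escape(segment):
    """Escape a JSON Pointer segment per RFC 6901: '~' -> '~0', then '/' -> '~1'."""
    return segment.replace("~", "~0").replace("/", "~1")

path = "/metadata/annotations/" + json_pointer_escape("foo.example.com/bar")
# -> "/metadata/annotations/foo.example.com~1bar"
```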
Anything else we need to know?:
- Kubernetes and client versions are outdated, but the latest master code looks like it will behave the same way.
- This will become more painful when/if official support for JSON patches on custom resources is added: https://github.com/kubernetes-client/python/issues/2039
- I believe this is caused by the differing defaults of Python's `json.dumps` and Go's `json.Marshal` (non-ASCII escapes vs. escapes for `<`, `>`, `&`), combined with kube-apiserver comparing the current document against the patch body byte for byte. More details in https://github.com/kubernetes/apiserver/issues/100#issue-1996717635. It would probably be best to fix this in the API server itself, but if that is not possible, the client library should adapt to the remote requirements; the same patch via `kubectl patch` passes just fine (see the demonstration after the workaround below).
- We have a hacky workaround that goes more or less as follows:
```
def wrap(request):
    """Wraps request with a proxy that fixes encoding.

    Intercepts requests, de-serializes the body and serializes it again,
    making sure non-ASCII characters are not escaped and json.Marshal's
    special symbols are escaped.
    """
    import json

    # Literal backslash sequences, so the re-serialized body carries the
    # escapes Go's json.Marshal would have produced.
    replace_dict = {
        "<": "\\u003c",
        ">": "\\u003e",
        "&": "\\u0026",
        "\u2028": "\\u2028",
        "\u2029": "\\u2029",
    }

    def reencode(body):
        payload = json.loads(body)
        new_body = json.dumps(payload, ensure_ascii=False)
        for k, v in replace_dict.items():
            new_body = new_body.replace(k, v)
        # Return bytes instead of str to force UTF-8 encoding on the request.
        return new_body.encode("utf-8")

    def patched(*args, **kwargs):
        args = list(args)  # a tuple would not support the item assignment below
        try:
            args[2] = reencode(args[2])
        except IndexError:
            kwargs["body"] = reencode(kwargs["body"])
        return request(*args, **kwargs)

    return patched


core_v1.api_client.rest_client.pool_manager.request = wrap(core_v1.api_client.rest_client.pool_manager.request)
```
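To make the mismatch concrete, here is a small illustration (mine, not from the original report) of what the two serializers emit for the same value; the Go output is based on `json.Marshal`'s documented HTML-escaping defaults:

```
import json

value = "<&>β"

# Python's default: non-ASCII is escaped, but <, >, & are left alone.
print(json.dumps(value))                      # "<&>\u03b2"

# With ensure_ascii=False, β stays as raw UTF-8; <, >, & are still unescaped.
print(json.dumps(value, ensure_ascii=False))  # "<&>β"

# Go's json.Marshal does the opposite by default: it escapes <, >, & as
# \u003c, \u003e, \u0026 and keeps β as raw UTF-8, yielding "\u003c\u0026\u003eβ".
# Neither Python form matches those bytes, so the apiserver's byte-level
# comparison for the "test" operation fails.
```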
Environment:
- Kubernetes version (`kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-1+1f8a47eae6d024-dirty", GitCommit:"1f8a47eae6d0246ddca52cf925d15e5f88ec876a", GitTreeState:"dirty", BuildDate:"2023-10-23T12:01:36Z", GoVersion:"go1.21.3 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-1+1f8a47eae6d024-dirty", GitCommit:"1f8a47eae6d0246ddca52cf925d15e5f88ec876a", GitTreeState:"dirty", BuildDate:"2023-10-23T12:01:36Z", GoVersion:"go1.21.3 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}
```
- OS: Photon 3 x86_64, Linux kernel 4.19.295-3.ph3
- Python version: Python 3.7.5
- Python client version: 17.17.0
Thanks for reporting the issue. It indeed looks like an encoding problem. `patch_namespaced_pod` is generated code, so I think the fix needs to happen in the upstream OpenAPI code generator: https://github.com/OpenAPITools/openapi-generator
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.