UnicodeDecodeError with method read_namespaced_pod_log()
What happened (please include outputs or screenshots):
While calling pod_logs = v1.read_namespaced_pod_log(name=pod_name, namespace=namespace), I receive the following error:
File "/opt/app/batch/python_virt/venv_app/lib64/python3.11/site-packages/kubernetes/client/rest.py", line 229, in request
r.data = r.data.decode('utf8')
^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 3793: invalid continuation byte
What you expected to happen: The pod logs are displayed correctly.
How to reproduce it (as minimally and precisely as possible): Read logs from a pod whose output is not UTF-8; in my case it is latin-1.
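For reference, a minimal reproduction sketch (the pod and namespace names are placeholders; any pod that writes non-UTF-8 bytes to its log stream triggers the error):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Raises UnicodeDecodeError inside kubernetes/client/rest.py because the
# response body is unconditionally decoded as utf-8.
pod_logs = v1.read_namespaced_pod_log(name="legacy-app-pod", namespace="default")
print(pod_logs)
```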
Anything else we need to know?:
There are legacy applications whose output has to be in a specific charset. Therefore, it would be great if the method read_namespaced_pod_log from kubernetes.client.CoreV1Api() could take a charset parameter so the developer can choose the desired charset. For example:
pod_logs = v1.read_namespaced_pod_log(name=pod_name, namespace=namespace, charset='latin1')
As stated above, the utf-8 charset is currently hardcoded in: https://github.com/kubernetes-client/python/blob/68d5a1479e7d735ea454021bc54e453c9b31baf7/kubernetes/client/rest.py#L232
It worked as soon as I changed r.data = r.data.decode('utf8') to r.data = r.data.decode('latin1').
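Until the client supports this, a workaround that avoids editing the installed package is to disable response preloading and decode the bytes yourself. This is a sketch, assuming the generated method returns the raw urllib3 response when _preload_content=False:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod_name, namespace = "legacy-app-pod", "default"  # placeholders

# _preload_content=False skips the hardcoded utf-8 decode in rest.py and
# returns the raw response; the bytes can then be decoded with whatever
# charset the legacy application actually uses.
resp = v1.read_namespaced_pod_log(
    name=pod_name,
    namespace=namespace,
    _preload_content=False,
)
pod_logs = resp.data.decode("latin-1")
```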
Environment:
- Kubernetes version: 1.26
- OS: Fedora 38
- Python version: 3.11.4
- Python client version: 26.1.0
/assign @yliaog
https://github.com/kubernetes-client/python/pull/2100 seems to be a similar issue.
Agreed, it would be better to support more charsets, but python/kubernetes/client/rest.py is generated by the OpenAPI generator, so the support needs to be added there.
I had the same problem, and applying the above solution worked. So I modified the code a little and added a function that allows setting the character set for the client. I attached the diff here: kubernetes.txt
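The attached diff is not reproduced here, but the idea is roughly the following: expose a setter for the charset and use it in place of the hardcoded 'utf8'. This is a hypothetical sketch (the names are illustrative, not the actual patch):

```python
# Hypothetical hook in kubernetes/client/rest.py (illustrative names only).
_RESPONSE_CHARSET = "utf-8"  # module-level default, matches current behaviour


def set_response_charset(charset):
    """Let applications choose the charset used to decode response bodies."""
    global _RESPONSE_CHARSET
    _RESPONSE_CHARSET = charset


# Inside RESTClientObject.request(), the hardcoded line
#     r.data = r.data.decode('utf8')
# would then become
#     r.data = r.data.decode(_RESPONSE_CHARSET)
```

An application with latin-1 output would call set_response_charset('latin-1') once at startup, before reading any pod logs.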
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.