Bad return for JSON response
Hi :wave:,
I have valid JSON log messages in my container.
I would like to get the last log message:
```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

namespace = "namespacename"
pod = "podname"

ret = v1.read_namespaced_pod_log(
    name=pod,
    namespace=namespace,
    tail_lines=1,
)
print(ret)
```
It returns:
```
{'time': '2024-02-19T13:32:53.08943787Z', 'level': 'INFO', 'msg': 'some msg'}
```
This is incorrect. There are single quotes in the returned string. It should be:
{"time": "2024-02-19T13:32:53.08943787Z", "level": "INFO", "msg": "some msg"}
I think the reason for this is in https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api_client.py#L260.
In the end, for a valid JSON response we effectively get this transformation (after deserialization): `str(json.loads(response))`.
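For illustration, here is that transformation reproduced outside the client (the variable names below are just for the demo): calling `str()` on the dict produced by `json.loads()` yields the Python repr with single quotes, which is no longer parseable as JSON.

```python
import json

# A valid JSON log line, as written by the container.
response = '{"time": "2024-02-19T13:32:53.08943787Z", "level": "INFO", "msg": "some msg"}'

decoded = json.loads(response)   # parses into a Python dict
as_str = str(decoded)            # dict repr: single quotes
print(as_str)
# {'time': '2024-02-19T13:32:53.08943787Z', 'level': 'INFO', 'msg': 'some msg'}

try:
    json.loads(as_str)           # the repr can no longer be parsed as JSON
except json.JSONDecodeError as exc:
    print(f"no longer valid JSON: {exc}")
```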
Environment:
- Kubernetes version (`kubectl version`): 1.23
- OS (e.g., MacOS 10.13.6): linux (fedora)
- Python version (`python --version`): 3.11.7
- Python client version (`pip list | grep kubernetes`): 29.0.0
> I think the reason for this is in https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api_client.py#L260.
The code is generated. The upstream code generator is https://github.com/OpenAPITools/openapi-generator. @dyens Could you check if the change should be made upstream?
Thanks for the detailed bug report. This has been fixed in the latest master of OpenAPITools/openapi-generator. It will be released in v7.5.0.
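Until a client release picks up that fix, one possible workaround (a sketch, assuming you only need the raw log text) is to skip client-side deserialization with `_preload_content=False`, which returns the raw urllib3 response instead of the re-stringified dict:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# _preload_content=False skips ApiClient deserialization, so the body is not
# run through json.loads()/str() and the log line stays valid JSON.
resp = v1.read_namespaced_pod_log(
    name="podname",
    namespace="namespacename",
    tail_lines=1,
    _preload_content=False,
)
print(resp.data.decode("utf-8"))
# {"time": "2024-02-19T13:32:53.08943787Z", "level": "INFO", "msg": "some msg"}
```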
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.