Debug mode doesn't remain active
What happened (please include outputs or screenshots):
I have tried the recommendation @roycaihw made in https://github.com/kubernetes-client/python/issues/1775#issuecomment-1100213205, but I noticed a strange thing: debug mode "switches back off" automatically after the response comes back from the server. Here is the call stack at that moment:
```
setLevel, __init__.py:1421
debug, configuration.py:271
__init__, configuration.py:126
__init__, v1_managed_fields_entry.py:58
__deserialize_model, api_client.py:641
__deserialize, api_client.py:303
<listcomp>, api_client.py:280
__deserialize, api_client.py:280
__deserialize_model, api_client.py:639
__deserialize, api_client.py:303
__deserialize_model, api_client.py:639
__deserialize, api_client.py:303
deserialize, api_client.py:264
__call_api, api_client.py:192
call_api, api_client.py:348
read_namespaced_config_map_with_http_info, core_v1_api.py:22802
read_namespaced_config_map, core_v1_api.py:22715
... application code
```
I even tried changing the default configuration; here is the full repro code:

```python
import kubernetes

kubernetes.config.load_kube_config(context='minikube')
client_config = kubernetes.client.Configuration.get_default_copy()
client_config.debug = True
kubernetes.client.Configuration.set_default(client_config)

core_api_client = kubernetes.client.CoreV1Api()
response_with_debug_logging = core_api_client.read_namespaced_config_map(
    name='my-state', namespace='my-namespace'
)
response_without_debug_logging = core_api_client.read_namespaced_config_map(
    name='my-state', namespace='my-namespace'
)
```
but the setLevel call mutates the global logging configuration and is never reset back. Based on this code, I did get debug working again by manually setting debug once more, but that seems like a very wrong thing to do.
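Concretely, the manual re-set looks roughly like this (a sketch reusing the names from the repro above; `api_client.configuration` is the configuration instance the client actually holds):

```python
# Sketch of the manual workaround: re-assert debug on the configuration held
# by the existing client after every call, so its setter raises the shared
# logger levels back to DEBUG for the next request.
response = core_api_client.read_namespaced_config_map(
    name='my-state', namespace='my-namespace'
)
core_api_client.api_client.configuration.debug = True  # re-enable before the next call
```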
What you expected to happen:
I would prefer that, after creating an API client and choosing to activate debug mode, it stays active for the whole lifetime of the client and every request it makes.
How to reproduce it (as minimally and precisely as possible):
- Follow the unofficial debug activation process as outlined here and referenced here.
- Execute more than one API call and you will see that the client flips back to non-debug mode and no request body is logged.
Anything else we need to know?:
I found two ways of avoiding this problem:
- creating a new API client for each interaction, because each fresh client picks up the default configuration again and keeps the debug property (see the sketch below)
- setting the debug property again after each response comes back

Is the recommendation perhaps to always use option 1? I would not expect that the API client object should not be reused and has to be thrown away each time; that seems wasteful to me, but perhaps I am wrong.
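For clarity, workaround 1 looks roughly like this (a sketch, assuming the debug-enabled default configuration from the repro above has already been set; each freshly constructed client copies that default configuration again):

```python
# Sketch of workaround 1: build a throwaway CoreV1Api per interaction so the
# client starts from the (debug-enabled) default configuration every time.
def read_config_map(name, namespace):
    core_api_client = kubernetes.client.CoreV1Api()  # fresh client per call
    return core_api_client.read_namespaced_config_map(name=name, namespace=namespace)
```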
Environment:
- Kubernetes version (`kubectl version`):
  - Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.6", GitCommit:"f59f5c2fda36e4036b49ec027e556a15456108f0", GitTreeState:"archive", BuildDate:"1980-01-01T00:00:00Z", GoVersion:"go1.16.13", Compiler:"gc", Platform:"darwin/amd64"}
  - Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g., MacOS 10.13.6): macOS 12.3.1
- Python version (`python --version`): 3.9.9
- Python client version (`pip list | grep kubernetes`): 23.3.0
/assign
Hi @milanaleksic, I think this is caused by the Configuration instance reinitialization.
Hi @roycaihw, I would like to take this issue. However, I am not very familiar with this project. Please kindly support me if I need help from the community :)
Hi @roycaihw, do you know what local_vars_configuration means?
local_vars_configuration is a parameter of every model object, but it is not passed into the klass instance when __deserialize_model is called, so the Configuration reinitialization is triggered.
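Roughly, the generated model constructors follow this pattern (a paraphrase for illustration, not the exact generated source):

```python
from kubernetes.client.configuration import Configuration

# Paraphrased shape of a generated model constructor: when __deserialize_model
# builds the instance without passing local_vars_configuration, a brand-new
# Configuration() is created, and its __init__ re-levels the shared loggers.
class SomeGeneratedModel(object):  # hypothetical stand-in for e.g. V1ManagedFieldsEntry
    def __init__(self, local_vars_configuration=None):
        if local_vars_configuration is None:
            local_vars_configuration = Configuration()  # <- the reinitialization
        self.local_vars_configuration = local_vars_configuration
```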
However, this code is auto-generated by the OpenAPI Generator and can't be edited manually! Do you have any suggestions to resolve this issue?
Hi @milanaleksic, I have one way to avoid this problem temporarily: insert the code below at this line in your local environment.

```python
# Pass something non-None so the model __init__ does not construct a fresh
# Configuration() (which would reset the shared logger levels).
if self.configuration.debug:
    kwargs['local_vars_configuration'] = 'not None'
```
> Hi @milanaleksic, I have one way to avoid this problem temporarily: insert the code below at this line in your local environment.
Hi, if your recommendation is that I locally patch my version of the Python k8s client, I'd actually prefer to always construct a new client, since that is less hacky.
cc @roycaihw, this needs a look. Debug mode is an upstream feature; it could be a bug in the upstream.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
We got hit by this too.
A quick analysis shows:
- `get_default_copy()` does a deep-copy of the configuration, except for the `logger`, which is shallow-copied
- a breakpoint in the `debug` setter shows `client/models/v1_list_meta` creates a new (empty) `Configuration()` (which before 12.0.0 cloned the currently live default configuration), which re-configures the shared logger with debug=False
```
-> ret = apps_v1_client.list_namespaced_deployment(namespace, label_selector=label_selector)
/usr/lib/python3.10/site-packages/kubernetes/client/api/apps_v1_api.py(3320)list_namespaced_deployment()
-> return self.list_namespaced_deployment_with_http_info(namespace, **kwargs)  # noqa: E501
/usr/lib/python3.10/site-packages/kubernetes/client/api/apps_v1_api.py(3435)list_namespaced_deployment_with_http_info()
-> return self.api_client.call_api(
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(348)call_api()
-> return self.__call_api(resource_path, method,
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(192)__call_api()
-> return_data = self.deserialize(response_data, response_type)
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(264)deserialize()
-> return self.__deserialize(data, response_type)
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(303)__deserialize()
-> return self.__deserialize_model(data, klass)
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(639)__deserialize_model()
-> kwargs[attr] = self.__deserialize(value, attr_type)
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(303)__deserialize()
-> return self.__deserialize_model(data, klass)
/usr/lib/python3.10/site-packages/kubernetes/client/api_client.py(641)__deserialize_model()
-> instance = klass(**kwargs)
/usr/lib/python3.10/site-packages/kubernetes/client/models/v1_list_meta.py(52)__init__()
-> local_vars_configuration = Configuration()
/usr/lib/python3.10/site-packages/kubernetes/client/configuration.py(126)__init__()
-> self.debug = False
> /usr/lib/python3.10/site-packages/kubernetes/client/configuration.py(260)debug()
-> self.__debug = value
```
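For reference, the part of the generated Configuration that issues the setLevel calls in this trace looks roughly like the sketch below (a paraphrase of the generated configuration.py, not the verbatim code; exact line numbers and logger names differ between client versions):

```python
import logging

class ConfigurationSketch:
    """Paraphrase of the relevant bits of the generated Configuration class."""

    def __init__(self):
        # logging.getLogger() returns process-wide Logger objects, so every
        # Configuration instance ends up pointing at the same loggers.
        self.logger = {
            "package_logger": logging.getLogger("kubernetes.client"),
            "urllib3_logger": logging.getLogger("urllib3"),
        }
        self.debug = False  # <- runs the setter below, dropping levels to WARNING

    @property
    def debug(self):
        return self.__debug

    @debug.setter
    def debug(self, value):
        self.__debug = value
        level = logging.DEBUG if value else logging.WARNING
        for logger in self.logger.values():
            logger.setLevel(level)  # mutates process-wide logging state
```

Because every new Configuration() runs `self.debug = False` in its __init__, the deserialization path above silently drops the shared loggers back to WARNING, which is why debug output stops after the first response.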
Pseudo code:

```python
import logging
import kubernetes

kubernetes.config.load_kube_config()
configuration = kubernetes.client.Configuration.get_default_copy()
configuration.debug = True
kubernetes.client.Configuration.set_default(configuration)
logger.setLevel(logging.DEBUG)  # 'logger' and the namespace/label_selector values are defined elsewhere (omitted here)
apps_v1_client = kubernetes.client.AppsV1Api()
ret = apps_v1_client.list_namespaced_deployment(namespace, label_selector=label_selector)
```
(stack trace taken with 22.6.0, but the bug was also reproduced with 26.1.0)
It's probably broken since 12.x, which changed the behavior of Configuration(); not sure what the best way forward is:
- `Configuration()` creates a non-shared logger?
- all load*config functions create a non-shared logger (I assume they start from `Configuration()`)?
- `get_default_copy()` creates a non-shared logger?
- a better/simpler way to enable debug mode directly on the default configuration? (not sure it would directly fix things though)
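For context on why the options above all revolve around a non-shared logger: `logging.getLogger()` returns the same `Logger` object for a given name, so any Configuration instance that calls `setLevel()` on it affects every other holder of that logger. A minimal illustration (the logger name is only an example):

```python
import logging

# logging.getLogger() is a per-name singleton, so "sharing" the logger between
# Configuration instances really means sharing one mutable object.
first = logging.getLogger("kubernetes.client")   # illustrative name
second = logging.getLogger("kubernetes.client")
assert first is second

first.setLevel(logging.DEBUG)
second.setLevel(logging.WARNING)                 # clobbers the DEBUG level set above
print(logging.getLevelName(first.level))         # -> WARNING
```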
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.