very slow performance due to excessive reconfiguration in models
What happened (please include outputs or screenshots):
Running this simple script against a big cluster takes about 30 seconds to execute:
import kubernetes as k8s
k8s.config.load_kube_config()
apps = k8s.client.AppsV1Api()
print(k8s.__version__)
print(len(apps.list_replica_set_for_all_namespaces().items))
time python3 test.py:
24.2.0
3661
real 0m32.394s
user 0m17.764s
sys 0m11.235s
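The issue does not show how the profile below was collected; a plausible reproduction (an assumption on my part) is to run the whole script under the standard-library profiler with python3 -m cProfile -s cumtime test.py, or in-process:

import cProfile

import kubernetes as k8s

k8s.config.load_kube_config()
apps = k8s.client.AppsV1Api()

# Profile just the expensive call, sorted by cumulative time. The report
# below was apparently produced by profiling the whole script (note the
# builtins.exec and test.py:1(<module>) entries at the top); this
# in-process variant isolates the same hot path.
cProfile.run("apps.list_replica_set_for_all_namespaces()", sort="cumtime")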
As you can see, there is very high system CPU usage. Running the script under a profiler (as sketched above) shows the following interesting entries:
957/1 0.002 0.000 32.177 32.177 {built-in method builtins.exec}
1 0.173 0.173 32.177 32.177 test.py:1(<module>)
1 0.000 0.000 31.563 31.563 apps_v1_api.py:3804(list_replica_set_for_all_namespaces)
1 0.000 0.000 31.563 31.563 apps_v1_api.py:3838(list_replica_set_for_all_namespaces_with_http_info)
1 0.000 0.000 31.563 31.563 api_client.py:305(call_api)
1 0.018 0.018 31.563 31.563 api_client.py:120(__call_api)
1 0.000 0.000 28.090 28.090 api_client.py:244(deserialize)
780711/1 0.997 0.000 27.281 27.281 api_client.py:266(__deserialize)
190593/1 0.934 0.000 27.281 27.281 api_client.py:620(__deserialize_model)
36658/1 0.078 0.000 27.281 27.281 api_client.py:280(<listcomp>)
190594 0.839 0.000 22.939 0.000 configuration.py:75(__init__)
190594 0.108 0.000 10.911 0.000 context.py:41(cpu_count)
190594 10.803 0.000 10.803 0.000 {built-in method posix.cpu_count}
190597 0.300 0.000 8.327 0.000 configuration.py:253(debug)
381195 0.216 0.000 7.443 0.000 __init__.py:1448(setLevel)
381195 4.364 0.000 7.117 0.000 __init__.py:1403(_clear_cache)
...
190594 10.803 0.000 10.803 0.000 {built-in method posix.cpu_count}
About 10 seconds are spent running multiprocessing.cpu_count, which accounts for most of the system CPU usage.
381195 0.216 0.000 7.443 0.000 __init__.py:1448(setLevel)
About 7 seconds are spent configuring logging.
Looking at the cause, it appears to be the following lines in every generated model:
if local_vars_configuration is None:
local_vars_configuration = Configuration()
This runs the Configuration constructor, which sets up logging and calls multiprocessing.cpu_count.
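For context, the relevant parts of the generated Configuration.__init__ look roughly like the abridged sketch below (based on the openapi-generator ~4.3.0 python templates; not the verbatim code, details vary by release):

import logging
import multiprocessing


class Configuration(object):
    # Abridged sketch of kubernetes/client/configuration.py.
    def __init__(self):
        self.logger = {}
        self.logger["package_logger"] = logging.getLogger("client")
        self.logger["urllib3_logger"] = logging.getLogger("urllib3")
        # Goes through the property setter below; this is the
        # configuration.py:253(debug) / setLevel cost in the profile.
        self.debug = False
        # One cpu_count() syscall per Configuration instance, i.e. per
        # deserialized model object.
        self.connection_pool_maxsize = multiprocessing.cpu_count() * 5

    @property
    def debug(self):
        return self._debug

    @debug.setter
    def debug(self, value):
        # Reconfigures every logger level on each new Configuration.
        self._debug = value
        level = logging.DEBUG if value else logging.WARNING
        for logger in self.logger.values():
            logger.setLevel(level)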
Commenting out the multiprocessing call confirms this; the script then runs significantly faster:
real 0m11.964s
user 0m8.162s
sys 0m0.338s
Is there a way to avoid calling Configuration on every model init?
Maybe something as simple as this would do:
--- a/kubernetes/client/api_client.py
+++ b/kubernetes/client/api_client.py
@@ -638,6 +638,7 @@ class ApiClient(object):
                 value = data[klass.attribute_map[attr]]
                 kwargs[attr] = self.__deserialize(value, attr_type)
 
+        kwargs["local_vars_configuration"] = self.configuration
         instance = klass(**kwargs)
 
         if hasattr(instance, 'get_real_child_model'):
The script runs in 8 seconds with it.
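The same idea applies when constructing models by hand: every generated model of this era accepts a local_vars_configuration keyword, so passing a shared Configuration avoids the per-object construction. A small illustration (the model V1ObjectMeta is chosen arbitrarily):

from kubernetes import client

# Build one Configuration up front and share it across models instead of
# letting each model __init__ create its own.
shared_conf = client.Configuration()
meta = client.V1ObjectMeta(name="example",
                           local_vars_configuration=shared_conf)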
https://github.com/kubernetes-client/python/blob/4dddad8dc47ba7c3eb26af28cc369e852e4a45db/kubernetes/client/api_client.py#L8
Generated by: https://openapi-generator.tech
Any change to the file has to be done in the generator.
/assign @yliaog
I assume this file needs to be changed? https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/main/resources/python-legacy/api_client.mustache
Though which version? The code here was generated with 4.3.0, and the latest is 6.2.1 if I interpret #1943 correctly. Is the version in use still being updated?
What is the best way to get this fixed quickly? This issue is quite severe.
Please submit the fix in the openapi-generator; that is the best way to fix it.
To which version? (in addition to master)
To the latest, then backport it to the version this repo is currently using.
Which is?
The generator has been updated on its main branch. The generator does not support old versions: https://github.com/OpenAPITools/openapi-generator/pull/13922#issuecomment-1305059264
I strongly recommend patching this locally for the current release.
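For anyone needing a stopgap without patching the installed package, most of the measured overhead is the repeated cpu_count syscall, so caching it once helps considerably. This is my sketch, not from the thread; it assumes Configuration reads multiprocessing.cpu_count via the module attribute (as in the generated code), and it does not remove the logging overhead:

import multiprocessing

# Cache the CPU count once so Configuration.__init__ no longer issues one
# posix.cpu_count syscall per deserialized model. Apply before any API
# calls that deserialize models.
_cpu_count = multiprocessing.cpu_count()
multiprocessing.cpu_count = lambda: _cpu_count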
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@yliaog are there any plans to update the project to use the latest/fixed version of openapi-generator? We've been facing the issue described by @juliantaylor and have applied a local patch with their suggestion, but it would be ideal to have it in a release soon 🥺 :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
This problem still exists in the latest version, 28.1.0.
If you have no plans to update the generator to the fixed version, can you please apply this patch locally? This is wasting loads of CPU cycles for every user.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten