Cannot change kube config at runtime
What happened (please include outputs or screenshots): For unit testing, I want to change the kubeconfig file at runtime, after the config module has already been imported. Because my test module imports the production module, the imports run before any fixtures and patches are applied. When I then change the value of the KUBECONFIG environment variable, the Kubernetes client ignores the change.
What you expected to happen: Cluster can be switched at runtime by changing the KUBECONFIG env var.
How to reproduce it (as minimally and precisely as possible): This is a short example showing that changes to the KUBECONFIG environment variable made after the import of the config module are ignored, while changes made before the import are honored.
export KUBECONFIG="/path/to/first.kubeconfig"
python not_working.py
# not_working.py
import os
from kubernetes import config
config.load_kube_config()
print(config.list_kube_config_contexts())
os.environ["KUBECONFIG"] = "/path/to/second.kubeconfig"
config.load_kube_config()
print(config.list_kube_config_contexts()) # still shows contexts of first.kubeconfig
export KUBECONFIG="/path/to/first.kubeconfig"
python working.py
# working.py
import os
os.environ["KUBECONFIG"] = "/path/to/second.kubeconfig"
from kubernetes import config
config.load_kube_config()
print(config.list_kube_config_contexts()) # shows contexts of second.kubeconfig
Anything else we need to know?: I've already tried the following, without success:
- config.load_kube_config() with parameters to set the new kubeconfig path
- config.load_config(), with and without parameters to set the new kubeconfig path
- not explicitly loading the kubeconfig
What did work was running kubectl from Python via subprocess.run(...); that way the updated KUBECONFIG env var is honored. But I need this to work within this package.
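For reference, a minimal sketch of that subprocess workaround (using the placeholder path from the repro above): the child process inherits os.environ at call time, so kubectl resolves the updated KUBECONFIG itself.
import os
import subprocess

os.environ["KUBECONFIG"] = "/path/to/second.kubeconfig"
# kubectl reads KUBECONFIG at its own startup, inside the child process.
result = subprocess.run(
    ["kubectl", "config", "get-contexts"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # contexts of second.kubeconfig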
Environment:
- Kubernetes version (kubectl version): Client Version: v1.29.3, Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3, Server Version: v1.28.3
- OS (e.g., MacOS 10.13.6): MacOS 14.4.1
- Python version (python --version): 3.12.3
- Python client version (pip list | grep kubernetes): 29.0.0
https://github.com/kubernetes-client/python/blob/master/kubernetes/base/config/kube_config.py#L816-L817
def load_kube_config(config_file=None, context=None,
                     client_configuration=None,
                     persist_config=True,
                     temp_file_path=None):
    ...
    if config_file is None:
        config_file = KUBE_CONFIG_DEFAULT_LOCATION
As you can see here, this appears to be because load_kube_config does not read the KUBECONFIG environment variable at call time; it falls back to KUBE_CONFIG_DEFAULT_LOCATION, a global variable inside the library that is evaluated only once, when the module is imported.
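Based on that, a possible workaround sketch (assuming the import-time evaluation described above): resolve the env var yourself and pass the path explicitly instead of relying on the library's default.
import os
from kubernetes import config

def load_kubeconfig_from_env():
    # Resolve KUBECONFIG at call time to bypass the import-time default.
    path = os.environ.get("KUBECONFIG", "~/.kube/config")
    config.load_kube_config(config_file=path)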
Thanks for the answer. From the code I would expect
config.load_kube_config(config_file="/path/to/second.kubeconfig")
to work, but that also does not change the config in the client.
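A possible explanation (an assumption from reading kube_config.py, not something confirmed in this thread): list_kube_config_contexts() has the same config_file=None fallback to the import-time default, so the repro keeps printing the old contexts unless the path is passed there as well. A minimal sketch:
import os
from kubernetes import config

os.environ["KUBECONFIG"] = "/path/to/second.kubeconfig"
path = os.environ["KUBECONFIG"]
config.load_kube_config(config_file=path)
# Pass the path here too; with no argument this call falls back to the
# module-level default that was captured at import time.
print(config.list_kube_config_contexts(config_file=path))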
I wonder if the default config behavior is related here: https://github.com/kubernetes-client/python/blob/master/CHANGELOG.md#v1201. Could you take a look?
/assign @roycaihw
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hey, guys! I use this library in my tests. It would be great to fix this problem with changing the kube config at runtime!
Hello, guys! Do you have any plans to support resetting/changing the config for this client? That's a critical problem for scenarios that reset and start up a new cluster during one test session.
Hello there! It seems this issue has affected many people. Guys, please add an option to reset/replace the config; it significantly impacts tests when we need to rebuild the cluster. Thanks!
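Until there is an official reset/replace API, here is one possible test-side workaround (a sketch only, not a library feature; reloading a module re-executes it, which may have side effects): reload kube_config so its import-time default is re-evaluated, then load the new config and build fresh API clients.
import importlib
import os

from kubernetes import client
from kubernetes.config import kube_config

def switch_cluster(kubeconfig_path):
    os.environ["KUBECONFIG"] = kubeconfig_path
    # Re-executing the module re-evaluates KUBE_CONFIG_DEFAULT_LOCATION.
    importlib.reload(kube_config)
    kube_config.load_kube_config()
    # ApiClient objects created before the switch keep their old
    # configuration snapshot, so construct new clients afterwards.
    return client.CoreV1Api()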
could you send a PR? thanks.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Any updates on this?