In the dynamic client example, `kubernetes.client.ApiClient(...)` takes in the result of `kubernetes.config.load_kube_config(...)`, which is None
Link to the issue (please include a link to the specific documentation or example):
Example where client.ApiClient(configuration=config.load_kube_config()) is used:
https://github.com/kubernetes-client/python/blob/master/examples/dynamic-client/configmap.py#L27
Link to where the config function load_kube_config() does not return anything:
https://github.com/kubernetes-client/python/blob/master/kubernetes/base/config/kube_config.py#L799
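For reference, the linked example builds the client roughly like this (paraphrased from the file above; the exact code may differ slightly):

```python
from kubernetes import config, dynamic
from kubernetes.client import api_client

# The example passes load_kube_config()'s return value -- which is None --
# straight into ApiClient as its configuration.
client = dynamic.DynamicClient(
    api_client.ApiClient(configuration=config.load_kube_config())
)
```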
Description of the issue (please include outputs or screenshots if possible):
I would assume that when constructing an ApiClient, I should pass in a configuration object as the argument. However, since config.load_kube_config() does not return anything, this example looks wrong to me.
I would assume I should do the following instead:
```python
from kubernetes import client, config, dynamic
from kubernetes.client import api_client

configuration = client.Configuration()
config.load_kube_config()
my_client = api_client.ApiClient(configuration=configuration)
```
This is because config.load_kube_config() has the following in its docstring:
"Loads authentication and cluster information from kube-config file and stores them in kubernetes.client.configuration."
So my assumption was that the client.Configuration() object is updated in place after being instantiated and can therefore be passed to api_client.ApiClient(...).
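If I read the kube_config.py signature correctly, load_kube_config() also accepts a client_configuration parameter that populates a caller-supplied Configuration rather than the library-wide default; if so, the intent could be made explicit like this (a sketch, assuming that parameter behaves as named):

```python
from kubernetes import client, config
from kubernetes.client import api_client

configuration = client.Configuration()
# Ask load_kube_config to fill in *this* object instead of the
# library-wide default Configuration.
config.load_kube_config(client_configuration=configuration)
my_client = api_client.ApiClient(configuration=configuration)
```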
Can anyone please correct me if I am wrong? I am unsure what the correct way to get an ApiClient is. Thanks in advance!
@fabianvf could you help take a look?
@tonur you're correct, the client is being instantiated there oddly. To create the dynamic client, you instantiate an API client and pass it in, so the proper series of calls would be:
```python
from kubernetes import client, config, dynamic
from kubernetes.client import api_client

configuration = client.Configuration()
config.load_kube_config()
my_api_client = api_client.ApiClient(configuration=configuration)
dynamic_client = dynamic.DynamicClient(my_api_client)
```
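As a quick smoke test (a sketch, assuming a reachable cluster and the usual dynamic-client resource API), the dynamic client can then be used to look up and list a resource:

```python
# Look up the ConfigMap resource type and list instances in a namespace.
configmap_api = dynamic_client.resources.get(api_version="v1", kind="ConfigMap")
for cm in configmap_api.get(namespace="default").items:
    print(cm.metadata.name)
```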
Is it okay if I make a Pull Request to change this in the example? Or is it something you'd rather do?
@tonur absolutely! A PR would be appreciated
Hey there @fabianvf. I found that this does not work for me at the moment. I have no idea what is going on, but this fails:
```python
from kubernetes import client
from kubernetes import config as k8sconfig

cluster_namespace = "test-cluster-namespace"
cluster_name = "test-cluster-name"
kubeconfig_file = "/tmp/k8s_config"

configuration = client.Configuration()
k8sconfig.load_kube_config(config_file=kubeconfig_file, context=f"{cluster_name}-admin@{cluster_name}")
cluster_client = client.ApiClient(configuration)
```
This ends up trying to connect to "localhost:80" instead of the correct IP and port for the Kubernetes cluster defined in kubeconfig_file.
But this is somehow working:
```python
from kubernetes import client
from kubernetes import config as k8sconfig

cluster_namespace = "test-cluster-namespace"
cluster_name = "test-cluster-name"
kubeconfig_file = "/tmp/k8s_config"

cluster_client = client.ApiClient(
    configuration=k8sconfig.load_kube_config(
        config_file=kubeconfig_file, context=f"{cluster_name}-admin@{cluster_name}"
    )
)
```
I don't know what strange magic the config.load_kube_config() function is returning; I am at a loss for words.
I would propose to document this in the example to discourage future optimists (like me) from spending hours trying to make a more "clean" and "clear" syntax work... 🤦
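A plausible explanation (a sketch, assuming ApiClient falls back to a copy of the library's default Configuration when given None, which is what the generated client code suggests): load_kube_config() returns None but populates that library-wide default, so the "working" one-liner effectively passes configuration=None and gets the populated defaults, while the "clean" version hands ApiClient a freshly constructed, still-empty Configuration whose placeholder host is localhost.

```python
from kubernetes import client, config

config.load_kube_config()  # returns None, but fills in the library default Configuration

# configuration=None -> ApiClient falls back to a copy of the populated
# default, so the host/credentials from the kubeconfig are used.
works = client.ApiClient(configuration=None)

# A fresh Configuration() is never touched by load_kube_config(), so it
# still points at the placeholder host (localhost) -> connection errors.
breaks = client.ApiClient(configuration=client.Configuration())
```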
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.