Any way to get namespace configuration?
I'm currently using the ClientBuilder.defaultClient() method to create an ApiClient. I like that it checks all the standard places where configuration might be found.
But I also want to know the namespace configuration. The ApiClient doesn't know anything about the namespace. If I want to get the namespace while retaining the flexibility of finding the configuration in all the different places, it seems like I would need to reimplement the logic within ClientBuilder.defaultClient(), dig into the methods it calls, and also read the namespace. But most of the methods it calls that ultimately read the kubeconfig or the mounted service account secret are private.
It looks to me like I'll need to copy/paste a lot of the util methods to find and read these files just to find the namespace configuration. Is that correct? Is there an easier way?
Would it make sense for the namespace to be added to the ApiClient and set by the various ClientBuilder methods? And if a namespace is configured on the ApiClient, could that value automatically be used by all client methods that require a namespace arg? (I'm aware that would be a ton more work, but it would be very, very nice as a user.)
This is a duplicate of #1313 which was implemented as Namespaces.getPodNamespace() (https://github.com/kubernetes-client/java/blob/master/util/src/main/java/io/kubernetes/client/util/Namespaces.java#L29)
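For context, Namespaces.getPodNamespace() works by reading the namespace file that the kubelet mounts into every pod. Here is a stdlib-only sketch of that idea; the file path and the "default" fallback are my assumptions for illustration, not a copy of the library code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PodNamespaceSketch {
    // Standard in-pod mount point (assumed; see Namespaces.java for the real constant).
    static final String NAMESPACE_FILE =
        "/var/run/secrets/kubernetes.io/serviceaccount/namespace";

    // Read the namespace from the given file, falling back to "default"
    // when the file cannot be read (i.e. we are not running in a pod).
    static String readPodNamespace(Path namespaceFile) {
        try {
            return new String(Files.readAllBytes(namespaceFile)).trim();
        } catch (IOException e) {
            return "default";
        }
    }
}
```

The path is parameterized so the logic can be exercised outside a pod.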
Closing this issue since it's already present in the library. Please use /reopen if it doesn't work for you.
/reopen
Namespaces.getPodNamespace() isn't exactly what I'm looking for. (Though it is related and I didn't know about it, so thank you for the pointer). I can't guarantee that my application is running in a pod. It may be, but it may also be running outside a cluster and using a kubeconfig file for its configuration.
Let me broaden my request out from the namespace specifically to all the configuration. I would like a utility method that can give me the information from the configuration files, whether they are the automounted in-pod files or a kubeconfig file. I want the same logic as ClientBuilder.standard(), which checks for configuration in multiple standard places (e.g. findConfigFromEnv(), then findConfigInHomeDir(), then cluster()).
Except I don't just want an ApiClient, which is all ClientBuilder.standard() provides. I want more information out of the configuration. If that logic of which files to check and in what order were extracted from ClientBuilder.standard() into its own utility method that returned some kind of generic Configuration object, that would satisfy my needs. It could find the configuration using the exact same findConfigFromEnv(), findConfigInHomeDir(), and cluster() methods, maybe with slight modifications.
(And again, I would just use those methods myself in my application, but they are private. I would prefer to avoid having to duplicate them in my application with minor differences.)
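To make the desired lookup order concrete, here is a stdlib-only sketch of the "$KUBECONFIG, then $HOME/.kube/config" resolution described above. resolveKubeConfigPath is a hypothetical helper name, not part of the library, and the in-cluster fallback is omitted:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class KubeConfigLocatorSketch {
    // Pick the kubeconfig path in the order described above:
    // $KUBECONFIG first, then $HOME/.kube/config.
    // Parameters stand in for System.getenv("KUBECONFIG") and the home
    // directory so the logic stays a pure, testable function.
    static Path resolveKubeConfigPath(String envKubeconfig, String homeDir) {
        if (envKubeconfig != null && !envKubeconfig.isEmpty()) {
            return Paths.get(envKubeconfig);
        }
        return Paths.get(homeDir, ".kube", "config");
    }
}
```

A generic configuration object returned from a method like this could then expose the namespace alongside everything the ApiClient already gets.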
@johnflavin-fw: Reopened this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I don't think that there is any really strong reason for those methods to be private. Perhaps we should just make them public?
Perhaps we should just make them public?
I think that could work. I've started implementing essentially that change + a little extra. I'd like to run the idea by you before I get too far in.
My idea was to take the private methods from ClientBuilder that find and read the kubeconfig file and move them to KubeConfig. Then I would add a public method that wraps up the private ones and implements the "read from $KUBECONFIG, then $HOME/.kube/config" logic.
Does that sound reasonable?
What expectations would you have for tests? It seems like, since I'm moving functionality from ClientBuilder into KubeConfig, the tests of that functionality in ClientBuilderTest could be moved into KubeConfigTest.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".