terratest
support in-cluster authentication
Hello, thanks for this awesome project!
Many of our end-to-end tests run inside a Kubernetes cluster. As we want to interact with the Kubernetes API from inside the same cluster where the tests are being executed, it would be helpful to have support for in-cluster authentication via service account tokens as well.
This is already widely supported in client-go, see: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration#authenticating-inside-the-cluster
I would be open to help with a PR, if you have any guidance on how to best implement it within your framework.
My starting point would be inside `GetKubernetesClientFromOptionsE`, and maybe adding an additional config option to `KubectlOptions` to indicate in-cluster authentication.
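To make the idea concrete, here is a minimal sketch of the branch such a check could introduce. The `KubectlOptions` struct below is a simplified stand-in for the real one in `modules/k8s/kubectl_options.go`, and the `InClusterAuth` field is an assumption about how the new option might look; the actual clientset construction via client-go is elided.

```go
package main

import (
	"fmt"
	"os"
)

// Simplified stand-in for terratest's KubectlOptions; the hypothetical
// InClusterAuth field is the extra option discussed above.
type KubectlOptions struct {
	ConfigPath    string
	InClusterAuth bool
}

// authMode sketches the branch GetKubernetesClientFromOptionsE could take:
// honor an explicit in-cluster flag first, otherwise use the kubeconfig
// on disk, and fall back to in-cluster auth when no file is found.
func authMode(opts KubectlOptions) string {
	if opts.InClusterAuth {
		return "in-cluster"
	}
	if _, err := os.Stat(opts.ConfigPath); err == nil {
		return "kubeconfig"
	}
	// No usable kubeconfig on disk: fall back to in-cluster auth.
	return "in-cluster"
}

func main() {
	fmt.Println(authMode(KubectlOptions{InClusterAuth: true}))         // in-cluster
	fmt.Println(authMode(KubectlOptions{ConfigPath: "/no/such/file"})) // in-cluster
}
```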
I think we'd welcome a PR here. @yorinasub17 Thoughts on the approach to take?
This makes sense! In terms of approach, we can use a method similar to what we do in kubergrunt, which already supports alternative authentication schemes.
Sounds great, I will try to implement a first draft asap. Thanks for your reply!
@yorinasub17 I looked into it a bit more; unfortunately, adding another option to `KubectlOptions` in `kubectl_options.go` would be a breaking change, as the signature of `NewKubectlOptions` would change.
We could avoid the breaking change by simply falling back to in-cluster authentication when `LoadApiClientConfigE` fails. Would that make sense? Do you have any preference or other ideas?
FYI, I already created a PR with the fallback solution: https://github.com/gruntwork-io/terratest/pull/760
> I looked into it a bit more; unfortunately, adding another option to `KubectlOptions` in `kubectl_options.go` would be a breaking change, as the signature of `NewKubectlOptions` would change.
I think we can introduce a new function, `NewKubectlOptionsWithInClusterAuth`, to maintain backwards compatibility, but the fallback approach also makes sense to me. Will try to review the PR today!
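A rough sketch of how such a constructor could sit alongside the existing one without breaking callers. The struct and constructors here are simplified stand-ins, not terratest's real implementation, and the `InClusterAuth` field is an assumed wiring detail.

```go
package main

import "fmt"

// Simplified stand-in for terratest's KubectlOptions.
type KubectlOptions struct {
	ContextName   string
	ConfigPath    string
	Namespace     string
	InClusterAuth bool // hypothetical flag driving in-cluster auth
}

// The existing constructor keeps its signature, so current callers don't break.
func NewKubectlOptions(contextName, configPath, namespace string) *KubectlOptions {
	return &KubectlOptions{
		ContextName: contextName,
		ConfigPath:  configPath,
		Namespace:   namespace,
	}
}

// The new constructor opts in to in-cluster auth without touching the old one.
func NewKubectlOptionsWithInClusterAuth(namespace string) *KubectlOptions {
	return &KubectlOptions{
		Namespace:     namespace,
		InClusterAuth: true,
	}
}

func main() {
	old := NewKubectlOptions("my-context", "/home/user/.kube/config", "default")
	fmt.Println(old.InClusterAuth) // false: existing behavior unchanged

	inCluster := NewKubectlOptionsWithInClusterAuth("default")
	fmt.Println(inCluster.InClusterAuth) // true
}
```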
Awesome, let me know if you would prefer `NewKubectlOptionsWithInClusterAuth` over the current PR, and I will prepare one accordingly.
This is not fully fixed yet - https://github.com/gruntwork-io/terratest/blob/v0.34.4/modules/k8s/tunnel.go#L152 still tries to read the config file. To me it sounds like this function should get the config from a clientset.
@dee-kryvenko Yup, that's why the ticket is still open. The PR implements a pathway to use in-cluster auth, but there isn't an intuitive mechanism to make that work (which a function like `NewKubectlOptionsWithInClusterAuth` would handle).
That said, as a workaround, you should be able to force it to look up the in-cluster auth if you pass in a non-existent path for the kubeconfig file path to `NewKubectlOptions`:

```go
opts := k8s.NewKubectlOptions("dont-exist", "/non-existent/path", "")
tun := k8s.NewTunnel(opts, /* ... other args omitted for brevity ... */)
```
That workaround wouldn't work, as `ForwardPortE` explicitly uses this path and fails when the file doesn't exist. A workaround that worked for me is to basically create an empty config file like:
```yaml
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
```
This tricks `LoadApiClientConfigE` under `ForwardPortE` into passing, and it seems to pick up the in-cluster config down the road anyway, so I am not blocked.
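For reference, a small shell sketch that writes that empty kubeconfig before the tests run. The target path is just an example; point your test options at whatever path you write.

```shell
#!/bin/sh
# Write a minimal, syntactically valid kubeconfig so that the kubeconfig
# loader parses it; the in-cluster config is then picked up at runtime.
KUBECONFIG_PATH="${KUBECONFIG_PATH:-/tmp/empty-kubeconfig}"
cat > "$KUBECONFIG_PATH" <<'EOF'
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
EOF
echo "wrote $KUBECONFIG_PATH"
```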
From what I can see, the challenge in `ForwardPortE` is to capture the `config` so it can later be passed to `spdy`. It sounds like a backwards-compatible version of `GetKubernetesClientFromOptionsE` could be added that returns the `config` along with a `clientset`, so `GetKubernetesClientFromOptionsE` can just delegate to that new function.
There is now a `k8s.NewKubectlOptionsWithInClusterAuth` that will return a client that only uses in-cluster auth, starting with https://github.com/gruntwork-io/terratest/releases/tag/v0.35.3
I'm not sure if it helps or not.
First of all, in the specific `tunnel` case I referenced, it needs not just a clientset but a config, to pass it to `spdy.RoundTripperFor(config)` down the road.
Second, I am not sure why the caller needs to worry about setting a specific option. Why not use the default discovery mechanism, i.e. explicit options -> env variable pointing to `KUBECONFIG` -> default `~/.kube/config` -> attempt to use in-cluster auth?
> First of all, in the specific tunnel case I referenced, it needs not just a clientset but a config, to pass it to `spdy.RoundTripperFor(config)` down the road.
Ah you are right. That is my mistake. This only solves half the problem, not all.
> Second, I am not sure why the caller needs to worry about setting a specific option. Why not use the default discovery mechanism, i.e. explicit options -> env variable pointing to `KUBECONFIG` -> default `~/.kube/config` -> attempt to use in-cluster auth?
In testing, it is often the case that you run multiple tests in parallel against different environments, and in these cases the CLI config options (which usually target a single cluster) are usually not enough to select the targets effectively. This is why terratest is set up to have the targets be explicitly specified in the code.
That said, the default chain (when everything is `""` or `nil`) should certainly support what you described.
This is unfortunately not an immediate need/priority for Gruntwork (the maintainers of terratest), so it's not going to be done anytime soon by us, but if anyone in the community wants to take a crack at reorganizing the authentication chain for the k8s functions to support what I described above, a PR to implement it is very welcome.
No, I understand; depending on what exactly you are testing and the test scenario, you might need to be explicit. That's why I put explicit options first in the priority list, but honestly, for most of my tf modules I would need it to automatically discover either my local kubeconfig or use in-cluster auth if it runs in CI. Having to put that discovery logic into every one of my tests would seem redundant. Just a usability suggestion. Maybe auto-discovery can be implemented as a separate option, i.e. the user would need to explicitly enable it? This issue is on my todo list; hopefully I'll get to it soon.