Support running kubecfg in cluster
Currently, when I try to run this in a pod I get:
invalid configuration: default cluster has no server defined
I think this involves (1) using the internal cluster client and (2) setting RBAC correctly. This is totally doable, but I'm also curious if you can say a bit about the use case?
(1) using the internal cluster client
I've got a branch that almost makes this work, will push it tomorrow.
(2) setting RBAC correctly
yes I've done this already (jsonnet template FTW)
This is totally doable, but I'm also curious if you can say a bit about the use case?
I want to run kubecfg diff in a loop, and export the exit code to Prometheus (using https://github.com/tomwilkie/prom-run) - then alert when the configs don't match.
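The essence of that use case is just capturing a command's exit code so it can be exported as a metric, which is what prom-run does. A rough stdlib-only sketch (the `false` command here is a stand-in for `kubecfg diff`, which exits non-zero when the configs differ; `command_exit_code` is a made-up metric name, not prom-run's actual output format):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runAndReport runs a command and returns its exit code, the value a
// prom-run-style wrapper would export as a gauge; 0 means the live
// cluster matches the config.
func runAndReport(name string, args ...string) int {
	cmd := exec.Command(name, args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1 // command could not be started at all
	}
	return 0
}

func main() {
	// Stand-in for `kubecfg diff <config>`; `false` always exits 1,
	// simulating a drifted config.
	code := runAndReport("false")
	fmt.Printf("command_exit_code %d\n", code)
}
```

Alerting on that gauge being non-zero gives the "configs don't match" signal.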
Urgh, thanks for reporting. Running in-cluster was definitely meant to be a supported use case. There were long-standing client-go bugs when running in-cluster but performing operations against another namespace - so if it's easy to do so, please verify that case works too.
It's not pretty, but @sebgoa points out that this works in-cluster as a workaround:
# read the service-account token mounted into the pod
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubecfg --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --token "$token" --server https://kubernetes:443 update ...
(assuming RBAC allows the relevant service account to actually perform an update)