kube2iam
IAM for non-kube containers on same host/interface
We run Rancher as our k8s distribution, which runs all of the k8s services (kubelet, apiserver, etc.) as Docker containers themselves. This causes an issue with kube2iam: those containers can't get any role from kube2iam, which breaks the k8s integrations. Is there any way to get kube2iam to keep handing out credentials to these containers?
It's probably a good idea to create isolated roles for kube-controller-manager/kube-cloud-controller and add the kube2iam annotation to their static manifests.
Not sure how you could do that on Rancher though; maybe they should implement that.
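For reference, a minimal sketch of what that annotation could look like on a static pod manifest, assuming an IAM role named k8s-controller-manager (a placeholder) and kube2iam's default annotation key; the image tag and flags are only examples:

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
  annotations:
    # kube2iam hands out credentials for the role named in this annotation.
    iam.amazonaws.com/role: k8s-controller-manager  # placeholder role name
spec:
  hostNetwork: true
  containers:
    - name: kube-controller-manager
      image: k8s.gcr.io/kube-controller-manager:v1.13.12  # example image/tag
      command:
        - kube-controller-manager
        - --cloud-provider=aws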
@ConstantineXVI Were you able to work around this problem?
I am currently running into this using Rancher/K8s. New nodes I spin up with annotations seem to get roles fine; however, the system nodes running the system containers are not able to auth using kube2iam even after being recreated, and therefore fail to come up or run at all if recreated. I first noticed this when I hit an issue creating load balancers for a Service.
EDIT:
I was able to use kube2iam within Rancher by not setting the host.iptables=true argument. In combination with not scheduling any containers on the non-compute hosts (using a separated-plane deployment for K8s) and adding the iptables rules on the compute hosts as defined in the kube2iam project, this seems to be a working solution. While it is still not ideal, it is a workaround for now.
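For reference, the metadata-interception rule the kube2iam project documents looks roughly like this when applied by hand on a compute host; the docker0 interface and port 8181 (kube2iam's default) are assumptions that depend on your network setup:

# Redirect EC2 metadata traffic from containers to the kube2iam agent on this host.
iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface docker0 \
  --jump DNAT \
  --to-destination $(curl -s 169.254.169.254/latest/meta-data/local-ipv4):8181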
Please note that this is my first run with K8s/Rancher and these clusters are not in a production environment, so please do not take this post as factual without doing your own due diligence.
@elocnatsirt
I managed to get around this issue by leaving host.iptables=true set and adding a nodeSelector to the kube2iam DaemonSet:
nodeSelector:
  compute: "true"
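For context, a minimal sketch of where that selector sits in the kube2iam DaemonSet spec; the compute=true label is an assumption and has to be applied to the compute nodes yourself (e.g. kubectl label node <node-name> compute=true), and the interface name depends on your container network:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube2iam
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      hostNetwork: true
      # Only schedule kube2iam onto nodes labelled compute=true,
      # leaving the Rancher system/control-plane hosts alone.
      nodeSelector:
        compute: "true"
      containers:
        - name: kube2iam
          image: jtblin/kube2iam:latest
          args:
            - --iptables=true           # keep the automatic metadata iptables rule
            - --host-ip=$(HOST_IP)
            - --host-interface=docker0  # assumption; depends on the container network
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          securityContext:
            privileged: true            # needed so kube2iam can manage iptables itself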