Feature Request: Multi-master configuration
I believe it should be possible to specify multiple master servers and/or multiple kubeconfigs in order to fully support a multi-master setup.
I generally use kubernetes.default.svc as much as possible, since k8s manages load balancing and failover between my 3 apiservers. Kube-router, however, obviously cannot use the cluster service endpoint, since kube-router is itself responsible for setting up routing to the service CIDR (chicken and egg).
We currently have a virtual IP that we use for disaster recovery / hard failover between apiservers, but I think it would be better to let kube-router be configured with all available masters and let it switch between them gracefully.
```
--master addrSlice
--kubeconfig pathSlice
```
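For example, the DaemonSet args might then look something like the sketch below. This is purely illustrative: repeating `--master` (or `--kubeconfig`) is the behaviour I'm proposing, not something kube-router supports today, and the addresses are placeholders.

```yaml
# Illustrative only: repeated --master flags are the proposed behaviour,
# not a current kube-router option. Addresses are example values.
args:
  - --run-router=true
  - --run-firewall=true
  - --run-service-proxy=true
  # Proposed: list every apiserver; kube-router fails over to the next one
  # when the currently active master stops responding.
  - --master=https://10.0.0.11:6443
  - --master=https://10.0.0.12:6443
  - --master=https://10.0.0.13:6443
  # or, alternatively, one kubeconfig per master could be supplied instead.
```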
Thanks @johanot for the feature request. So are you suggesting that kube-router should switch to an alternative master/kubeconfig from the slice if it detects that the currently active master is no longer functioning?
@murali-reddy Yes, something like that. I'm not expecting any kind of advanced load-balancing algorithm, just the ability to fail over between apiservers.
We are running a multi-master setup, but currently we cannot take the apiserver that kube-router uses out of rotation for maintenance without the entire cluster becoming "static", because kube-router can no longer watch cluster changes and reconcile routes and policies accordingly.
I would also like this feature, as I have to kill the kube-router pods in my cluster whenever I rebuild master nodes because some nodes stop routing properly. As a workaround, I'm looking to see if there is a way to let Kubernetes detect when kube-router gets into this state and kill/restart it automatically.
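Something like the following liveness probe is roughly what I have in mind, just a sketch: the probe command, port handling, and thresholds are assumptions, and the apiserver's /healthz may need different auth in your cluster.

```yaml
# Sketch only: restart kube-router when it can no longer reach the apiserver.
# Uses the env vars injected into every pod; the service VIP itself may be
# what is broken in this failure mode, so adjust the target as needed.
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - curl -sk -m 5 https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/healthz
  initialDelaySeconds: 60
  periodSeconds: 30
  failureThreshold: 5
```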
Same issue, but for kube-proxy: https://github.com/kubernetes/kubernetes/issues/18174
There is a PR to support this in client-go, https://github.com/kubernetes/kubernetes/pull/40674, but it very likely won't get merged.
Related Kubernetes issue: https://github.com/kubernetes/kubernetes/issues/19161, linked here to help people find this thread.
Assuming the service CIDR is routable in the environment, one easy (though likely not ideal) way to work around the chicken-and-egg problem is to change the kube-router container entrypoint so that it falls back to the local apiserver when kubernetes.default.svc is not reachable, for example:
```yaml
command:
  - /bin/sh
  - -c
  - |
    set -x
    curl -k -m 5 https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}
    if [ $? -ne 0 ]
    then
      export KUBERNETES_SERVICE_HOST=127.0.0.1
      export KUBERNETES_SERVICE_PORT=6443
    fi
    exec /usr/local/bin/kube-router \
      --run-router=true \
      --run-firewall=false \
      --run-service-proxy=true \
      --advertise-cluster-ip=true \
      --bgp-graceful-restart=true \
      --enable-ibgp=false \
      --enable-overlay=false \
      --nodes-full-mesh=false \
      --metrics-port=8080 \
      --enable-pod-egress=false \
      --hairpin-mode=true \
      --auto-mtu=true
```
Basically, the first time the cluster is being built, kube-router falls back to the node's local address, which in this case is the first master node.