vitess-operator
Add an option to use k8s as the topology server
We recently merged the k8s topology into Vitess. This allows Vitess to run without requiring a dedicated topology service. It's actually so useful that it's the default for the 2.X Helm charts.
It should be pretty easy to update the operator to support the k8s topo. It will just need to create the following extra resources:

- The `vitesstopologynode` CRD (could be done during operator installation).
- ServiceAccounts for all component pods that need to use the topology.
- RBAC for the ServiceAccounts that allows them full access to `vitesstopologynodes`.
- Set the right topo flags; it should just be `-topo_implementation=k8s`. The root doesn't change, and the server_address doesn't matter because the components will use their in-cluster kubeconfig to detect everything else by default.
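As a rough sketch, the resulting per-component flags might look like the following (the root path here is only an illustration; the address value comes from the cluster spec further down this thread and is effectively ignored by the k8s plugin):

```shell
# Illustrative only: the topo flags every component would get.
# The server address is not used by the k8s plugin, which falls back to
# the in-cluster kubeconfig by default.
-topo_implementation=k8s
-topo_global_server_address=kubernetes.default.svc
-topo_global_root=/vitess/global
```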
That makes sense, as long as the extra CRDs and RBAC are in optional extra files that aren't included in the main kustomize component. To continue supporting multi-region deployments, we'll need to keep the default topo implementation as something that supports running instances in a quorum across regions for HA global topo.
We'll also need to import the k8s topo plugin in `pkg/operator/toposerver/connpool.go`, since the operator talks directly to Vitess topo as well.
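A sketch of what that import could look like — Vitess topo plugins register themselves via their package `init()`, so a blank import is enough (package path taken from the Vitess repo; the surrounding package name is assumed from the file path above):

```go
package toposerver

// Blank-importing the plugin registers the "k8s" topo implementation
// with Vitess's topo factory registry via the package's init().
import (
	_ "vitess.io/vitess/go/vt/topo/k8stopo"
)
```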
> the server_address doesn't matter because the components will use their in-cluster kubeconfig
Out of curiosity, does the k8s topo plugin support optionally connecting and authenticating to a remote k8s API server? That would give users the option of picking k8s topo for the cell-local topos in a multi-region deployment.
Yes, you can set the following flags to use a specific kubeconfig or override the context/namespace:

- `-topo_k8s_context string`
- `-topo_k8s_kubeconfig string`
- `-topo_k8s_namespace string`

The default is to use the in-cluster config if `topo_k8s_kubeconfig` is not set.
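For example, pointing a component at a remote cluster for its cell-local topo might look like this (all paths and names below are placeholders, and the component's other required flags are omitted):

```shell
# Illustrative: reach a remote API server for topo by mounting a
# kubeconfig into the pod and pointing the k8s topo flags at it.
vtgate \
  -topo_implementation=k8s \
  -topo_k8s_kubeconfig=/etc/vitess/remote-kubeconfig \
  -topo_k8s_context=remote-cell \
  -topo_k8s_namespace=vitess-topo
```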
We actually use those flags if you export `TOPO=k8s` before running the local example. I actually did all the k8s topo development that way, only using k8s for the topo server and running all components locally.
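The workflow described above would look something like this (assuming a cluster reachable via your current kubeconfig and the `VitessTopoNode` CRD already installed; the script name is from the Vitess `examples/local` directory):

```shell
# Run the Vitess local example against the k8s topo instead of etcd.
export TOPO=k8s
cd examples/local
./101_initial_cluster.sh
```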
Hi, I would like to have this option added, and I am willing to attempt implementing it if someone can provide guidance.
Do vtctld, vtgate, and vttablet ALL get the `-topo_implementation=k8s` option? What happens to the etcd/globalLockServer options?
This would be great to reduce the complexity of the environment.
#204 makes it so that you can use the k8s topo, though it doesn't implement some of the quality-of-life improvements suggested in this thread.
In your cluster spec, you have to specify:

```yaml
globalLockserver:
  external:
    address: "kubernetes.default.svc"
    implementation: "k8s"
    rootPath: "/some/root/path"
```
You have to install the CRD https://github.com/vitessio/vitess/blob/main/go/vt/topo/k8stopo/VitessTopoNodes-crd.yaml
You need to add an RBAC rule for the service account:

```yaml
- apiGroups: ["topo.vitess.io"]
  resources: ["vitesstoponodes"]
  verbs: ["*"]
```
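Putting that rule into complete objects, a minimal manifest might look like the following (the ServiceAccount/Role names and namespace are placeholders, not values from this thread):

```yaml
# Sketch: ServiceAccount plus Role/RoleBinding granting full access to
# vitesstoponodes, per the rule quoted above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vitess
  namespace: vitess
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vitess-topo
  namespace: vitess
rules:
  - apiGroups: ["topo.vitess.io"]
    resources: ["vitesstoponodes"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vitess-topo
  namespace: vitess
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vitess-topo
subjects:
  - kind: ServiceAccount
    name: vitess
    namespace: vitess
```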
Hi, is there an example of how to use k8s as the topology server? An example of a Vitess cluster as a YAML file would help.
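A rough sketch of such a file, assembled from the `globalLockserver` snippet earlier in this thread (the cluster name, root path, and cell layout are placeholders, not tested values):

```yaml
# Sketch: minimal VitessCluster using the k8s topo as the global
# lockserver. Assumes the VitessTopoNode CRD and RBAC are installed.
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  globalLockserver:
    external:
      address: "kubernetes.default.svc"
      implementation: "k8s"
      rootPath: "/vitess/global"
  cells:
    - name: zone1
      gateway:
        replicas: 1
```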