
Container control plane

Open aerosouund opened this issue 1 year ago • 9 comments

What this PR does / why we need it:

Runs the control plane of the kubevirtCI cluster in containers to cut down on resource consumption; this is expected to save one runner per CI lane run across all repos. It is based on #1230

In a typical kubevirtCI cluster, the control plane is unschedulable. As seen in this snippet, only the system pods run on it:

[vagrant@node01 ~]$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -A -o wide | grep 101
kube-system   calico-node-xwvhg                          1/1     Running   0          3m47s   192.168.66.101   node01   <none>           <none>
kube-system   etcd-node01                                1/1     Running   1          4m1s    192.168.66.101   node01   <none>           <none>
kube-system   kube-apiserver-node01                      1/1     Running   1          4m1s    192.168.66.101   node01   <none>           <none>
kube-system   kube-controller-manager-node01             1/1     Running   1          4m1s    192.168.66.101   node01   <none>           <none>
kube-system   kube-proxy-qcwmn                           1/1     Running   0          3m47s   192.168.66.101   node01   <none>           <none>
kube-system   kube-scheduler-node01                      1/1     Running   1          4m2s    192.168.66.101   node01   <none>           <none>

Design

There is no standardized tool or technology that achieves what this PR tries to achieve, at least not in a way that matches the needs of kubevirtCI. One of the big requirements is that the join process remains the same for workers (through kubeadm) whether they are joining a VM control plane or a container control plane, so the PR provides its own way of provisioning certificates, running DNS in the cluster, and handling many other rudimentary Kubernetes concepts.

The code for the control plane container lives in cluster-provision/gocli/control-plane. The creation of the control plane happens in a way similar to how kubeadm provisions a cluster, through the concept of phases. Below is a snippet from its main function illustrating how some of the phases are called:

	if err := NewCertsPhase(defaultPkiPath).Run(); err != nil {
		return nil, err
	}

	if err := NewRunETCDPhase(cp.dnsmasqID, cp.containerRuntime, defaultPkiPath).Run(); err != nil {
		return nil, err
	}

	if err := NewKubeConfigPhase(defaultPkiPath).Run(); err != nil {
		return nil, err
	}

	if err := NewRunControlPlaneComponentsPhase(cp.dnsmasqID, cp.containerRuntime, defaultPkiPath, cp.k8sVersion).Run(); err != nil {
		return nil, err
	}

The phases it runs are:

  • Certs: Provisions the cluster certificate authority and the certificates of the individual components, signed by this CA
  • ETCD: Runs etcd
  • Kubeconfig: Creates the admin, controller manager, and scheduler kubeconfig files
  • Bootstrappers RBAC: Creates a bootstrap token secret and the RBAC roles the kubelet needs to register itself with the API
  • Bootstrap auth resources: Creates resources that kubeadm expects to find in the cluster during joining
  • Kube Proxy: Deploys kube-proxy
  • CNI: Creates the Calico CNI resources that node01 would previously have created
  • CoreDNS: Deploys CoreDNS
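
For reference, each phase only needs to expose a Run() error method. A minimal sketch of the pattern (the Phase interface and runPhases helper below are illustrative names, not taken from the PR's actual code) could look like this:

	// Illustrative sketch of the phase pattern: the Phase interface and
	// runPhases helper are hypothetical, shown only to make the composition
	// of the calls in the earlier snippet explicit.
	type Phase interface {
		// Run executes one self-contained step of control plane creation.
		Run() error
	}

	// runPhases runs the given phases in order and stops at the first failure,
	// mirroring the error handling shown earlier.
	func runPhases(phases ...Phase) error {
		for _, p := range phases {
			if err := p.Run(); err != nil {
				return err
			}
		}
		return nil
	}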

This then gets instantiated in the KV provider to start it:

	if kp.Nodes > 1 {
		runner := controlplane.NewControlPlaneRunner(dnsmasq, strings.Split(kp.Version, "-")[1], uint(kp.APIServerPort))
		c, err = runner.Start()
		if err != nil {
			return err
		}
		k8sClient, err := k8s.NewDynamicClient(c)
		if err != nil {
			return err
		}
		kp.Client = k8sClient
	}

and the node01 provisioner now only gets called if the node count is 1:

	if nodeIdx == 1 && kp.Nodes == 1 {
		n := node01.NewNode01Provisioner(sshClient, kp.SingleStack, kp.NoEtcdFsync)

Changes to the networking setup

Only one change is required in dnsmasq.sh:

  if [ ${NUM_NODES} -gt 1 ] && [ $i -eq 1 ]; then
    ip tuntap add dev tap101 mode tap user $(whoami)
    ip link set tap101 master br0
    ip link set dev tap101 up
    ip addr add 192.168.66.110/24 dev tap101
    ip -6 addr add fd00::110 dev tap101
    iptables -t nat -A PREROUTING -p tcp -i eth0 -m tcp --dport 6443 -j DNAT --to-destination 192.168.66.110:6443
  fi

If the node count is higher than 1, create an interface called tap101, manually assign it the required IPs, and forward port 6443 from eth0 to it. Since the API server container gets launched in the same netns as dnsmasq, all that's needed afterwards is that the server advertises 192.168.66.110 as the API server endpoint, and this is taken care of in the code.
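
To illustrate (a sketch only, not the PR's actual code), the wiring can be as simple as passing the tap101 address to the standard kube-apiserver flags when the container command is built; the helper name and the omitted flags below are assumptions:

	// Hypothetical helper: builds the kube-apiserver arguments so that the
	// server advertises the tap101 address configured in dnsmasq.sh above.
	// --advertise-address and --secure-port are standard kube-apiserver flags.
	func apiServerArgs() []string {
		return []string{
			"kube-apiserver",
			"--advertise-address=192.168.66.110", // must match the IP added to tap101
			"--secure-port=6443",                 // the port DNAT'ed from eth0
			// ...certificates, etcd endpoints and the remaining flags go here
		}
	}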

Current state

The PR is being tested locally to verify that all previously existing functionality is retained, and the code is being cleaned up for the final presentation. Due to its size, the PR is opened as a draft so the community can see it and give feedback on any points to help drive the direction of the project.

Extra notes

cc: @dhiller @brianmcarey @acardace @xpivarc

Checklist

This checklist is not enforcing, but it's a reminder of items that could be relevant to every PR. Approvers are expected to review this list.

Release note:


aerosouund avatar Nov 06 '24 10:11 aerosouund

Skipping CI for Draft Pull Request. If you want CI signal for your change, please convert it to an actual PR. You can still manually trigger a test run with /test all

kubevirt-bot avatar Nov 06 '24 10:11 kubevirt-bot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

kubevirt-bot avatar Nov 06 '24 10:11 kubevirt-bot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign davidvossel for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

kubevirt-bot avatar Nov 06 '24 10:11 kubevirt-bot

Thanks for your pull request. Before we can look at it, you'll need to add a 'DCO signoff' to your commits.

📝 Please follow instructions in the contributing guide to update your commits with the DCO

Full details of the Developer Certificate of Origin can be found at developercertificate.org.

The list of commits missing DCO signoff:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

kubevirt-bot avatar Nov 16 '24 17:11 kubevirt-bot

@brianmcarey

Valid concerns for sure; let me discuss them.

Overall, I am not convinced of the resources that this will save as we will just be running the kubernetes control plane components somewhere else.

True that the control plane will run somewhere else, but you would not be reserving a big amount of compute for it. Currently, whatever amount you reserve for VMs in the cluster is what gets allocated to the control plane which can be rather big sometimes.

In general, the majority of resource saving is accrued from being given the ability to schedule workload pods (istio, cdi, multus.. etc) on any node in the cluster. Meaning that you have unlocked the total sum of resources used by those containers to be allocatable on node 1 which was previously the control plane.

We will still have a requirement to have a control plane node for our testing in kubevirt/kubevirt as a lot of the KubeVirt infra components require a control plane node to be scheduled - kubevirt/kubevirt#11659

Yes, I ran into this issue while testing this PR. For now, the hacky fix I am using is labeling a random node as the control plane even though it isn't, and that seems to be enough to make it work.

But if we check the reasons mentioned in the PR, they say it's because:

if you take over virt-controller, you can create a pod in every namespace with an image of your choosing, mounting any secret you like and dumping it. If you take over virt-operator, you can directly create privileged pods. If you take over virt-api, you can inject your own malicious kernel or image via modifying webhooks.

I am still reading their full rationale behind this, but based on what they say, these risks will be present wherever the components are scheduled. Also, to my knowledge (and correct me if I am wrong), we aren't providing any additional security hardening on the control plane node. So the problems are present in all cases.

An opinionated take I have on this is that kubevirtCI is an ephemeral cluster creator, not meant for any long-lived clusters. Also, the clusters it creates run in isolated environments (at least that's how it is in CI). So in terms of priorities, CI efficiency and resources beat security.

These KubeVirt infra components can be sensitive to things like selinux policies and kernel modules so running in a controlled VM adds some benefits here.

If I understand correctly, you are saying KV infra components are best suited to being run in a VM. Under this PR they still are. What has changed is that the control plane components run elsewhere and all nodes are labeled as workers (which frees up node01).

We could make this container control plane configurable, but I don't see it running in the main CI workloads, and I am not sure how much it will be used in local dev environments: if someone wants to test against a lightweight cluster setup, we have the kind cluster providers which provide a very lightweight cluster.

It has challenges for sure, but I believe it is a very strong candidate to run in CI, and no challenge (so far) seems so glaring as to imply it is impossible to run it in CI. The latest of them was validation webhooks timing out because the API server is not in the pod network; this was overcome by using Konnectivity.

Happy to discuss this further. Let me know if you have any opinions on what I said or if anything I said needs correction.

aerosouund avatar Nov 20 '24 12:11 aerosouund

True that the control plane will run somewhere else, but you would not be reserving a big amount of compute for it. Currently, whatever amount you reserve for VMs in the cluster is what gets allocated to the control plane which can be rather big sometimes.

In general, the majority of resource saving is accrued from being given the ability to schedule workload pods (istio, cdi, multus.. etc) on any node in the cluster. Meaning that you have unlocked the total sum of resources used by those containers to be allocatable on node 1 which was previously the control plane.

We do use the control plane node (node01) for scheduling test workloads so I am not sure if the resources are wasted. node01 is schedulable. For instance here you can see a test VM on node01 - https://storage.googleapis.com/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/13208/pull-kubevirt-e2e-k8s-1.31-sig-compute/1859000686096683008/artifacts/k8s-reporter/3/1_overview.log

What would the benefit of this approach be over just running a single node kubevirtci cluster? What kind of resource savings are we seeing by moving the Kubernetes control plane components out of the VM? I don't think those components are that heavy on resources, but maybe I am wrong.

brianmcarey avatar Nov 21 '24 16:11 brianmcarey

@brianmcarey

We do use the control plane node (node01) for scheduling test workloads so I am not sure if the resources are wasted. node01 is schedulable. For instance here you can see a test VM on node01 - https://storage.googleapis.com/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/13208/pull-kubevirt-e2e-k8s-1.31-sig-compute/1859000686096683008/artifacts/k8s-reporter/3/1_overview.log

Based on this, it seems that some components are indeed scheduled on the control plane. I need to investigate per component why that is, as some components actively check for the control plane label on the node they get scheduled on (the kubevirt CR job, for example), but by default the control plane does not take any pods. My take is that in tests of sufficient scale those components (kubevirtCI components) can indeed take up sizable resources. I can try to provide hard numbers for this.

What would the benefit of this approach be over just running a single node kubevirtci cluster? What kind of resource savings are we seeing by moving the Kubernetes control plane components out of the VM? I don't think those components are that heavy on resources, but maybe I am wrong.

You are right in saying that moving the control plane components isn't going to result in large savings, as they aren't the main culprit in resource consumption. The benefit is that by taking them out, all your nodes are now workers and you can treat them all as a shared resource pool, rather than giving a particular node special treatment. This means that the kubevirtci components don't need to take scheduling constraints into account and can sit on any node, effectively achieving better resource utilization and savings across the cluster.

In general, the sensible way to move forward with this project is to:

1. Get it working for all kubevirtCI test cases and lanes
2. See if we can actually start downsizing some testing workloads (by changing the job definitions); if we see improvements, we can make this our new standard

Let me know what you think

aerosouund avatar Nov 22 '24 10:11 aerosouund

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot avatar Feb 20 '25 10:02 kubevirt-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot avatar Mar 22 '25 11:03 kubevirt-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot avatar Apr 21 '25 11:04 kubevirt-bot

@kubevirt-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

kubevirt-bot avatar Apr 21 '25 11:04 kubevirt-bot