# k8s-on-openstack

An opinionated way to deploy a Kubernetes cluster on top of an OpenStack cloud.
It is based on the following tools:

- `kubeadm`
- `ansible`
## Getting started
The following mandatory environment variables need to be set before calling `ansible-playbook`:

- `OS_*`: standard OpenStack environment variables such as `OS_AUTH_URL`, `OS_USERNAME`, ...
- `KEY`: name of an existing SSH keypair
The following optional environment variables can also be set:
- `NAME`: name of the Kubernetes cluster, used to derive instance names, the `kubectl` configuration and the security group name
- `IMAGE`: name of an existing Ubuntu 16.04 image
- `EXTERNAL_NETWORK`: name of the neutron external network, defaults to 'public'
- `FLOATING_IP_POOL`: name of the floating IP pool
- `FLOATING_IP_NETWORK_UUID`: UUID of the floating IP network (required for LBaaSv2)
- `USE_OCTAVIA`: try to use Octavia instead of Neutron LBaaS, defaults to False
- `USE_LOADBALANCER`: assume a loadbalancer is used and allow traffic to nodes (default: false)
- `SUBNET_CIDR`: the subnet CIDR for OpenStack's network (default: `10.8.10.0/24`)
- `POD_SUBNET_CIDR`: CIDR of the pod network (default: `10.96.0.0/16`)
- `CLUSTER_DNS_IP`: IP address of the cluster DNS service passed to kubelet (default: `10.96.0.10`)
- `BLOCK_STORAGE_VERSION`: version of the block storage (Cinder) service, defaults to 'v2'
- `IGNORE_VOLUME_AZ`: whether to ignore the AZ field of volumes, needed on some clouds where AZs confuse the driver, defaults to False
- `NODE_MEMORY`: how many MB of memory nodes should have, defaults to 4 GB
- `NODE_FLAVOR`: the exact OpenStack flavor name or ID to use for the nodes. When set, the `NODE_MEMORY` setting is ignored.
- `NODE_COUNT`: how many nodes to provision, defaults to 3
- `NODE_AUTO_IP`: assign a floating IP to nodes, defaults to False
- `NODE_DELETE_FIP`: delete the floating IP when a node is destroyed, defaults to True
- `NODE_BOOT_FROM_VOLUME`: boot node instances using boot-from-volume. Useful on clouds that only support boot-from-volume.
- `NODE_TERMINATE_VOLUME`: delete the root volume when a node instance is destroyed, defaults to True
- `NODE_VOLUME_SIZE`: size of each node volume, defaults to 64 GB
- `NODE_EXTRA_VOLUME`: create an extra unmounted data volume for each node, defaults to False
- `NODE_EXTRA_VOLUME_SIZE`: size of the extra data volume for each node, defaults to 80 GB
- `NODE_DELETE_EXTRA_VOLUME`: delete the extra data volume when a node is destroyed, defaults to True
- `MASTER_BOOT_FROM_VOLUME`: boot the master instance on a volume for data persistence, defaults to True
- `MASTER_TERMINATE_VOLUME`: delete the volume when the master instance is destroyed, defaults to True
- `MASTER_VOLUME_SIZE`: size of the master volume, defaults to 64 GB
- `MASTER_MEMORY`: how many MB of memory the master should have, defaults to 4 GB
- `MASTER_FLAVOR`: the exact OpenStack flavor name or ID to use for the master. When set, the `MASTER_MEMORY` setting is ignored.
- `AVAILABILITY_ZONE`: the availability zone to use for nodes and the default `StorageClass` (defaults to `nova`). This affects `PersistentVolumeClaims` without an explicit storage class.
- `HELM_REPOS`: a list of additional helm repos to add, separated by semicolons. Example: `charts https://github.com/helm/charts;mycharts https://github.com/dev/mycharts`
- `HELM_INSTALL`: a list of helm charts and their parameters to install, separated by semicolons. Example: `mycharts/mychart;charts/somechart --name somechart --namespace somenamespace`
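For example, a cluster could be sized before running the playbook by exporting a few of these variables. The values below are hypothetical; `KEY` must name a keypair that already exists in your OpenStack project:

```shell
# Hypothetical example values, not defaults shipped by the playbooks.
export KEY=my-keypair     # mandatory: an existing SSH keypair
export NAME=demo-k8s      # optional: cluster name, used to derive instance names
export NODE_COUNT=5       # optional: number of worker nodes (default: 3)
export NODE_MEMORY=8192   # optional: MB of memory per node (default: 4 GB)

# then: ansible-playbook site.yaml
```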
Spin up a new cluster:
```shell
$ ansible-playbook site.yaml
```
Destroy the cluster:
```shell
$ ansible-playbook destroy.yaml
```
Upgrade the cluster:
The `upgrade.yaml` playbook implements the upgrade steps described in https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/
After updating the `kubernetes_version` and `kubernetes_ubuntu_version` variables in `group_vars/all.yaml`, run the following commands:
```shell
$ ansible-playbook upgrade.yaml
$ ansible-playbook site.yaml
```
## Open Issues
### Find a better way to configure worker nodes' network plugin
Somehow, the network plugin (kubenet) is not correctly set on the worker nodes. On the master node, `/var/lib/kubelet/kubeadm-flags.env` (created by `kubeadm init`) contains:

```
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --cloud-provider=external --network-plugin=kubenet --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf"
```
It contains the correct `--network-plugin=kubenet` as configured here. After joining the k8s cluster, the worker node's copy of `/var/lib/kubelet/kubeadm-flags.env` (created by `kubeadm join`) looks like this:

```
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf"
```
It contains `--network-plugin=cni` despite setting `network-plugin: kubenet` here, because the `JoinConfiguration` is ignored by `kubeadm join` when using a join token.
Once I edit `/var/lib/kubelet/kubeadm-flags.env` to contain `--network-plugin=kubenet`, the worker node goes online. I've added a hack in `roles/kubeadm-nodes/tasks/main.yaml` to set the correct value.
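Conceptually, the workaround amounts to rewriting that one flag after `kubeadm join` and restarting the kubelet. The sketch below illustrates the idea on a local copy of the file (it is not the exact task from `roles/kubeadm-nodes/tasks/main.yaml`; on a real node the path is `/var/lib/kubelet/kubeadm-flags.env`):

```shell
# Simulate the file 'kubeadm join' writes on a worker node.
FLAGS_FILE=kubeadm-flags.env
echo 'KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni"' > "$FLAGS_FILE"

# Force kubenet instead of the cni value chosen by 'kubeadm join'.
sed -i 's/--network-plugin=cni/--network-plugin=kubenet/' "$FLAGS_FILE"
cat "$FLAGS_FILE"

# On the node itself, follow with: systemctl restart kubelet
```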
## Prerequisites
- Ansible (tested with version 2.9.1)
- Shade library required by the Ansible OpenStack modules (`python-shade` on Debian)
## CI/CD
The following environment variables need to be defined:

- `OS_AUTH_URL`
- `OS_PASSWORD`
- `OS_USERNAME`
- `OS_DOMAIN_NAME`
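For example, exported into the job environment (the values below are placeholders; in a real pipeline they would come from the CI system's secret variables, never from the repository):

```shell
# Placeholder OpenStack credentials for a CI job environment.
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_USERNAME=ci-user
export OS_PASSWORD=not-a-real-password
export OS_DOMAIN_NAME=default
```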
## Authors
- François Deppierraz
- Oli Schacher
- Saverio Proto
- @HaseHarald https://github.com/HaseHarald
- Dennis Pfisterer https://github.com/pfisterer
## References
- https://kubernetes.io/docs/getting-started-guides/kubeadm/
- https://www.weave.works/docs/net/latest/kube-addon/
- https://github.com/kubernetes/dashboard#kubernetes-dashboard