Example: deploy Cozystack on top of plain Ubuntu using kubeadm
Here is an example of how to deploy Cozystack on top of two Ubuntu machines with kubeadm installed on them. I'm not sure whether this is something Cozystack would be keen to support in the future, nor which section of the Cozystack docs it would belong in, but it can still be used as an example.
We have two identical Ubuntu machines, one master & one worker:
kubeadm - 10.1.1.10
kubeadm-worker - 10.1.1.11
cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.2 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
uname -r
6.8.0-60-generic
How kubeadm is managed is up to the end user, but we must emphasize that this kubeadm setup should not be considered production-ready. It is merely an example of how kubeadm can be deployed and used, and it demonstrates which settings might be needed for a successful Cozystack installation.
(Run the following on both Ubuntu machines.)
sudo apt install -y containerd
# ⚙️ Default config file
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# 🔁 Restart and enable service
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo bash -c 'cat >> /etc/sysctl.conf <<EOT
fs.inotify.max_user_watches=2099999999
fs.inotify.max_user_instances=2099999999
fs.inotify.max_queued_events=2099999999
net.ipv4.ip_forward=1
EOT'
sudo sysctl -p
If you plan to use the virtualization module in Cozystack, you also need to modify the containerd configuration file:
#Add this to /etc/containerd/config.toml file
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
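After editing the file, restart containerd so that the change takes effect:
sudo systemctl restart containerd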
Initialise master node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --service-dns-domain=cozy.local --skip-phases=addon/kube-proxy
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# we want pods to be scheduled on master node too
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
After kubeadm init completes, it prints a kubeadm join command with a token for joining the cluster. Run that command on the worker node.
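It looks roughly like this (the token and CA certificate hash below are placeholders; copy the exact command from your kubeadm init output):
sudo kubeadm join 10.1.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>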
If you plan to use Linstor, there are a few things to take into account: the DRBD 9 kernel module needs to be installed. While this can be done via the Linstor Operator, it doesn't hurt to install all the necessary tools manually as well.
sudo apt install software-properties-common apt-transport-https ca-certificates
sudo add-apt-repository ppa:linbit/linbit-drbd9-stack
sudo apt-get update
sudo apt install drbd-dkms drbd-utils
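As a quick sanity check (not part of the original steps), you can verify that the DRBD module builds and loads for the running kernel:
sudo modprobe drbd
modinfo drbd | grep -E '^(filename|version)'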
Deploy Cozystack:
cat > cozystack-config.yaml <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  bundle-name: "paas-full"
  ipv4-pod-cidr: "10.244.0.0/16"
  ipv4-pod-gateway: "10.244.0.1"
  ipv4-svc-cidr: "10.96.0.0/16"
  ipv4-join-cidr: "100.64.0.0/16"
  root-host: weecodelab.nl
  api-server-endpoint: https://10.1.1.10:6443
  values-cilium: |
    cilium:
      k8sServiceHost: 10.1.1.10
      k8sServicePort: 6443
EOT
Download the installer manifest and change the Kubernetes API environment variables in it from localhost to the control-plane address and port:
env:
- name: KUBERNETES_SERVICE_HOST
  value: 10.1.1.10
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
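If you haven't fetched the manifest yet, it is published as a release asset; the URL below assumes the asset name used on the Cozystack releases page and may differ for your version:
wget https://github.com/cozystack/cozystack/releases/latest/download/cozystack-installer.yaml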
kubectl create ns cozy-system
kubectl apply -f cozystack-config.yaml
kubectl apply -f cozystack-installer.yaml
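To follow the installation progress, you can tail the installer logs (assuming the deployment is named cozystack, as in the Cozystack docs) and watch the HelmReleases come up:
kubectl logs -n cozy-system deployment/cozystack -f
kubectl get hr -A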
The Linstor satellite is going to fail with modprobe: FATAL: Module drbd not found in directory /lib/modules/6.8.0-60-generic. The way to fix it is to run:
kubectl delete LinstorSatelliteConfiguration cozystack-talos
Alternatively, patch this template.
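To see which satellite configurations exist before deleting one (the CRD is provided by the Piraeus operator), something like this should work:
kubectl get linstorsatelliteconfigurations.piraeus.io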
The end result:
# I did not configure ingress so it's failing
k get hr -A
NAMESPACE NAME AGE READY STATUS
cozy-cert-manager cert-manager 59m True Helm install succeeded for release cozy-cert-manager/cert-manager.v1 with chart [email protected]
cozy-cert-manager cert-manager-crds 59m True Helm install succeeded for release cozy-cert-manager/cert-manager-crds.v1 with chart [email protected]
cozy-cert-manager cert-manager-issuers 59m True Helm install succeeded for release cozy-cert-manager/cert-manager-issuers.v1 with chart [email protected]
cozy-cilium cilium 59m True Helm upgrade succeeded for release cozy-cilium/cilium.v2 with chart [email protected]+1
cozy-cilium cilium-networkpolicy 59m True Helm install succeeded for release cozy-cilium/cilium-networkpolicy.v1 with chart [email protected]
cozy-clickhouse-operator clickhouse-operator 59m True Helm install succeeded for release cozy-clickhouse-operator/clickhouse-operator.v1 with chart [email protected]
cozy-cluster-api capi-operator 59m True Helm install succeeded for release cozy-cluster-api/capi-operator.v1 with chart [email protected]
cozy-cluster-api capi-providers 59m True Helm install succeeded for release cozy-cluster-api/capi-providers.v1 with chart [email protected]
cozy-dashboard dashboard 59m True Helm upgrade succeeded for release cozy-dashboard/dashboard.v2 with chart [email protected]
cozy-etcd-operator etcd-operator 59m True Helm install succeeded for release cozy-etcd-operator/etcd-operator.v1 with chart [email protected]
cozy-fluxcd fluxcd 59m True Helm upgrade succeeded for release cozy-fluxcd/fluxcd.v2 with chart [email protected]
cozy-fluxcd fluxcd-operator 59m True Helm upgrade succeeded for release cozy-fluxcd/fluxcd-operator.v2 with chart [email protected]
cozy-goldpinger goldpinger 59m True Helm install succeeded for release cozy-goldpinger/goldpinger.v1 with chart [email protected]
cozy-grafana-operator grafana-operator 59m True Helm install succeeded for release cozy-grafana-operator/grafana-operator.v1 with chart [email protected]
cozy-kafka-operator kafka-operator 59m True Helm install succeeded for release cozy-kafka-operator/kafka-operator.v1 with chart [email protected]
cozy-kamaji kamaji 59m True Helm install succeeded for release cozy-kamaji/kamaji.v1 with chart [email protected]
cozy-kubeovn kubeovn 59m True Helm upgrade succeeded for release cozy-kubeovn/kubeovn.v2 with chart [email protected]
cozy-kubeovn kubeovn-webhook 59m True Helm install succeeded for release cozy-kubeovn/kubeovn-webhook.v1 with chart [email protected]
cozy-kubevirt-cdi kubevirt-cdi 59m True Helm install succeeded for release cozy-kubevirt-cdi/kubevirt-cdi.v1 with chart [email protected]
cozy-kubevirt-cdi kubevirt-cdi-operator 59m True Helm install succeeded for release cozy-kubevirt-cdi/kubevirt-cdi-operator.v1 with chart [email protected]
cozy-kubevirt kubevirt 59m True Helm install succeeded for release cozy-kubevirt/kubevirt.v1 with chart [email protected]
cozy-kubevirt kubevirt-instancetypes 59m True Helm install succeeded for release cozy-kubevirt/kubevirt-instancetypes.v1 with chart [email protected]
cozy-kubevirt kubevirt-operator 59m True Helm install succeeded for release cozy-kubevirt/kubevirt-operator.v1 with chart [email protected]
cozy-linstor linstor 59m True Helm install succeeded for release cozy-linstor/linstor.v1 with chart [email protected]
cozy-linstor piraeus-operator 59m True Helm install succeeded for release cozy-linstor/piraeus-operator.v1 with chart [email protected]
cozy-mariadb-operator mariadb-operator 59m True Helm install succeeded for release cozy-mariadb-operator/mariadb-operator.v1 with chart [email protected]
cozy-metallb metallb 59m True Helm install succeeded for release cozy-metallb/metallb.v1 with chart [email protected]
cozy-monitoring monitoring-agents 59m True Helm install succeeded for release cozy-monitoring/monitoring-agents.v1 with chart [email protected]
cozy-objectstorage-controller objectstorage-controller 59m True Helm install succeeded for release cozy-objectstorage-controller/objectstorage-controller.v1 with chart [email protected]
cozy-postgres-operator postgres-operator 59m True Helm install succeeded for release cozy-postgres-operator/postgres-operator.v1 with chart [email protected]
cozy-rabbitmq-operator rabbitmq-operator 59m True Helm install succeeded for release cozy-rabbitmq-operator/rabbitmq-operator.v1 with chart [email protected]
cozy-redis-operator redis-operator 59m True Helm install succeeded for release cozy-redis-operator/redis-operator.v1 with chart [email protected]
cozy-reloader reloader 59m True Helm install succeeded for release cozy-reloader/reloader.v1 with chart [email protected]
cozy-snapshot-controller snapshot-controller 59m True Helm install succeeded for release cozy-snapshot-controller/snapshot-controller.v1 with chart [email protected]
cozy-system cozy-proxy 59m True Helm install succeeded for release cozy-system/cozystack.v1 with chart [email protected]
cozy-system cozystack-api 59m True Helm install succeeded for release cozy-system/cozystack-api.v1 with chart [email protected]
cozy-system cozystack-controller 59m True Helm install succeeded for release cozy-system/cozystack-controller.v1 with chart [email protected]
cozy-vertical-pod-autoscaler vertical-pod-autoscaler 59m True Helm install succeeded for release cozy-vertical-pod-autoscaler/vertical-pod-autoscaler.v1 with chart [email protected]
cozy-vertical-pod-autoscaler vertical-pod-autoscaler-crds 59m True Helm install succeeded for release cozy-vertical-pod-autoscaler/vertical-pod-autoscaler-crds.v1 with chart [email protected]
cozy-victoria-metrics-operator victoria-metrics-operator 59m True Helm install succeeded for release cozy-victoria-metrics-operator/victoria-metrics-operator.v1 with chart [email protected]
tenant-root etcd 42m True Helm install succeeded for release tenant-root/etcd.v1 with chart [email protected]
tenant-root ingress 42m True Helm install succeeded for release tenant-root/ingress.v1 with chart [email protected]
tenant-root ingress-nginx-system 42m False Helm install failed for release tenant-root/ingress-nginx-system with chart [email protected]: context deadline exceeded
tenant-root monitoring 42m True Helm install succeeded for release tenant-root/monitoring.v1 with chart [email protected]
tenant-root tenant-root 59m True Helm upgrade succeeded for release tenant-root/tenant-root.v2 with chart [email protected]
tenant-root virtual-machine-gpu 28s True Helm install succeeded for release tenant-root/virtual-machine-gpu.v1 with chart [email protected]
@kubebn thank you! That's a fascinating guide. Why did you make the choice to use Ubuntu over Talos in the first place?
Cheers.
I wouldn't say that choosing Ubuntu over Talos was intentional. It was more that I didn't have any other option. Otherwise, I would have definitely gone with Talos 🙂
@kubebn I cannot reproduce your example. It seems that if you use the paas-full bundle and try to override the cilium values
values-cilium: |
  cilium:
    k8sServiceHost: 10.1.1.10
    k8sServicePort: 6443
installer.sh still tries to install cilium with install_basic_charts() by
make -C packages/system/cilium apply resume
and uses the default cilium values files:
- values.yaml
- values-talos.yaml
- values-kubeovn.yaml
I took the example from here: https://cozystack.io/docs/operations/bundles/#how-to-overwrite-parameters-for-specific-components. It worked for me just an hour ago; what exactly is not working?
Cilium still uses the old values. Which Cozystack version do you use?
The latest installer manifest; I think it's 0.30.6 there.
I'm still on 0.30.4; I will try today with the latest available version.
Important edit: the problem occurs in RKE2, but in kubeadm it is working ;)
I confirm that in 0.30.6 values-cilium is not respected.
config:
  data:
    bundle-name: "paas-full"
    root-host: {{ rancher_hostname }}
    api-server-endpoint: https://{{ ansible_default_ipv4.address }}:6443
    ipv4-pod-cidr: "10.42.0.0/16"
    ipv4-pod-gateway: "10.244.0.1"
    ipv4-svc-cidr: "10.43.0.0/16"
    ipv4-join-cidr: "100.64.0.0/16"
    values-cilium: |
      cilium:
        k8sServiceHost: 127.0.0.1
        k8sServicePort: 6443
In the cilium-operator pods there is still the default port, 7445.
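For reference, one way to check which API endpoint Cilium actually picked up (assuming the standard cilium-config ConfigMap in the cozy-cilium namespace):
kubectl -n cozy-cilium get cm cilium-config -o yaml | grep -E 'k8s-service-(host|port)'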
In RKE2 and k3s it's not working; only in kubeadm.
Hi, @kubebn. I'm Dosu, and I'm helping the cozystack team manage their backlog and am marking this issue as stale.
Issue Summary:
- You shared a detailed demo setup example of deploying Cozystack on Ubuntu 24.04 using kubeadm.
- The setup is intended as a demo, not production-ready.
- There was a discussion about using Ubuntu instead of Talos, with your clarification that Ubuntu was chosen due to lack of alternatives.
- Other users reported issues with overriding cilium values in version 0.30.6, specifically in RKE2 and k3s, but not with kubeadm.
- This suggests a potential bug or limitation in certain deployment methods.
Next Steps:
- Please let me know if this issue is still relevant with the latest version of Cozystack by commenting here to keep the discussion open.
- Otherwise, this issue will be automatically closed in 7 days.
Thank you for your understanding and contribution!