
Upgrading Calico

Open onedr0p opened this issue 2 years ago • 11 comments

Details

Describe the solution you'd like:

Document a way to upgrade Calico. For now, the process can be done by running the following against an already provisioned cluster:

kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/tigera-operator.yaml

After it is upgraded, it is wise to manually bump the version in the Ansible config:

https://github.com/k8s-at-home/template-cluster-k3s/blob/63d077e1dd50cb0ae9af5c21d951bec1d78c60ad/provision/ansible/inventory/group_vars/kubernetes/k3s.yml#L31
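
For reference, the bump would look roughly like this, assuming the manifest is pulled in through the xanmanning.k3s role's k3s_server_manifests_urls list (the exact variable name in the template may differ):

# provision/ansible/inventory/group_vars/kubernetes/k3s.yml (illustrative snippet)
# Bump the archive version here to match what was applied with kubectl.
k3s_server_manifests_urls:
  - url: https://projectcalico.docs.tigera.io/archive/v3.22/manifests/tigera-operator.yaml
    filename: tigera-operator.yaml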

onedr0p avatar May 10 '22 11:05 onedr0p

Would using the tigera operator be an acceptable solution? I can open a PR tomorrow if that's the case. That way Renovate or Flux automation can stay on top of updates.

Example
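
For illustration, a minimal sketch of what that could look like with Flux (the chart repo URL and version here are assumptions, not a tested configuration):

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: projectcalico
  namespace: flux-system
spec:
  interval: 1h
  url: https://projectcalico.docs.tigera.io/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: tigera-operator
  namespace: tigera-operator
spec:
  interval: 15m
  chart:
    spec:
      chart: tigera-operator
      version: v3.22.0 # illustrative; Renovate could keep this bumped
      sourceRef:
        kind: HelmRepository
        name: projectcalico
        namespace: flux-system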

h3mmy avatar Jun 19 '22 03:06 h3mmy

I'm taking over tigera-operator with Helm too, but it's not ideal because you need to manually apply the Helm ownership labels to the CRDs and resources, or else it will not install.

See my notes on deploying the helm chart:

https://github.com/onedr0p/home-ops/issues/3385
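
For context, Helm will only adopt pre-existing objects when each one carries the ownership metadata, roughly the following (the release name and namespace must match the release taking ownership):

metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: tigera-operator
    meta.helm.sh/release-namespace: tigera-operator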

onedr0p avatar Jun 19 '22 10:06 onedr0p

I would be more inclined to support the method of installing Calico with the k3s HelmChart CRD and then taking it over with a Flux HelmRelease, but I haven't had time to explore this much.
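
For anyone exploring that route, a rough (untested) sketch of the k3s HelmChart side, assuming the upstream chart repo:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: tigera-operator
  namespace: kube-system
spec:
  repo: https://projectcalico.docs.tigera.io/charts
  chart: tigera-operator
  targetNamespace: tigera-operator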

onedr0p avatar Jun 19 '22 12:06 onedr0p

That's fair to want to support upgrades. Could also add a Job to do the relabeling. I already have a messy bash script I can clean up for use: https://github.com/h3mmy/bloopySphere/blob/main/fix-crd.sh

I'll check out the rancher HelmChart option

h3mmy avatar Jun 19 '22 14:06 h3mmy

Combing through the process, using the k3s HelmChart just seems like it's adding an extra step since the relabeling would still need to be performed with a Patch or Job.

h3mmy avatar Jun 22 '22 13:06 h3mmy

That's a bummer, I was hoping that it would add in the annotations for us.

onedr0p avatar Jun 22 '22 16:06 onedr0p

I'll try a dry run when I'm able. Just want to make sure.

h3mmy avatar Jun 23 '22 04:06 h3mmy

Right now this component is in limbo: it is not managed by either k3s or Flux.

Could we perhaps apply the helm ownership labels to the tigera-operator manifest on the ansible side, when it is first deployed to the cluster?
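
One possible shape of such a task, patching the live object right after the initial deploy (a sketch using the kubernetes.core.k8s module, one task per resource, shown only for the Deployment, not tested):

- name: Add Helm ownership metadata to the tigera-operator Deployment
  kubernetes.core.k8s:
    state: patched
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: tigera-operator
        namespace: tigera-operator
        labels:
          app.kubernetes.io/managed-by: Helm
        annotations:
          meta.helm.sh/release-name: tigera-operator
          meta.helm.sh/release-namespace: tigera-operator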

haraldkoch avatar Aug 05 '22 17:08 haraldkoch

I am not sure of the best way forward, to be honest. Right now there are two methods:

  1. Apply the new manifests with kubectl

    kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/tigera-operator.yaml
    
  2. Patch the Calico resources to add the Helm ownership metadata and then apply the HelmRelease or Helm chart

    kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch installation default --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    

Having an Ansible playbook just for applying the patches might be annoying to maintain moving forward, e.g. if Calico adds another resource that needs to be patched.
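
If we do stick with patching for now, the repetition above could at least be collapsed into a loop over the affected resources. A rough sketch of that (not tested, and it would still need a new entry whenever Calico adds a resource):

for resource in "installation default" \
                "podsecuritypolicy tigera-operator" \
                "clusterrole tigera-operator" \
                "clusterrolebinding tigera-operator" \
                "-n tigera-operator deployment tigera-operator" \
                "-n tigera-operator serviceaccount tigera-operator"; do
  # Apply the Helm ownership label and annotations in a single merge patch per resource.
  kubectl patch $resource --type=merge \
    -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}, "annotations": {"meta.helm.sh/release-name": "tigera-operator", "meta.helm.sh/release-namespace": "tigera-operator"}}}'
done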

Ideally I would like to switch to Cilium, but I am dead set on them implementing BGP without MetalLB hacks before I consider it.

onedr0p avatar Aug 05 '22 17:08 onedr0p

I was going to suggest scripting a check for which CRDs require patching and running a templated task, but that may be equally annoying to maintain. I'm hoping to switch to Cilium at some point as well. I'm currently trying to figure out how to transition the cluster to BGP first.

h3mmy avatar Aug 07 '22 14:08 h3mmy

I had to do those as well

kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'

Diaoul avatar Sep 04 '22 12:09 Diaoul

Noting for anyone who stumbles onto this thread and hits the following error when trying to kubectl apply -f a newer version:

The CustomResourceDefinition "installations.operator.tigera.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

See https://github.com/projectcalico/calico/issues/6491. You'll want to use kubectl replace since these are CRDs.
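
In other words, something along these lines, substituting the archive URL for the release being applied:

kubectl replace -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/tigera-operator.yaml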

sp3nx0r avatar Sep 24 '22 21:09 sp3nx0r