
Remove `v1alpha2` CRDs


While adding the shortNames for our CRDs, we ran into a problem when trying to remove the old v1alpha2 CRD version. Even though the resources are migrated to v1 during the upgrade path, the CRD status keeps the v1alpha2 version in the storedVersions field, which blocks the upgrade that removes the old version. Therefore, we need to find the best way to fix this so we can safely remove the v1alpha2 version.

To reproduce the issue, follow these steps:

  1. Start a clean cluster
  2. Install kubewarden-crds version 0.1.4 and kubewarden-controller version 0.4.6. These are the versions prior to the v1 CRDs and the Kubewarden v1.0.0 release
  3. Install a v1alpha2 policy (see the policy definition below)
  4. Upgrade to v1.0.0 of the crds and controller Helm charts
  5. Upgrade to the latest crds and controller Helm charts
  6. Modify a local copy of the kubewarden-crds Helm chart, removing the v1alpha2 version
  7. Try to upgrade the CRDs using the local Helm chart

The upgrade fails with the following error:

Error: UPGRADE FAILED: cannot patch "clusteradmissionpolicies.policies.kubewarden.io" with kind CustomResourceDefinition: CustomResourceDefinition.apiextensions.k8s.io "clusteradmissionpolicies.policies.kubewarden.io" is invalid: status.storedVersions[0]: Invalid value: "v1alpha2": must appear in spec.versions

So even though the policies have been migrated to v1 during the upgrade path, the storedVersions field still reports that v1alpha2 objects are stored. This is the field's description:

storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list.

Considering this documentation, I think our controller needs to update this field to allow the removal of the old CRD version.
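As a rough sketch of what such a controller-side cleanup could look like (the package and helper names below are made up for illustration, and it assumes a controller-runtime client whose scheme registers apiextensions.k8s.io/v1):

// Minimal sketch, not the final implementation: drop an old version from a
// CRD's status.storedVersions once every stored object has been migrated to
// the new storage version.
package crdcleanup

import (
    "context"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    "k8s.io/apimachinery/pkg/types"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// dropStoredVersion removes oldVersion from the CRD's status.storedVersions.
// It must only run after all stored objects have been rewritten in the new
// storage version, otherwise objects left in etcd would become unreadable.
func dropStoredVersion(ctx context.Context, c client.Client, crdName, oldVersion string) error {
    // CRDs are cluster scoped, so only the name is needed.
    crd := &apiextensionsv1.CustomResourceDefinition{}
    if err := c.Get(ctx, types.NamespacedName{Name: crdName}, crd); err != nil {
        return err
    }

    kept := make([]string, 0, len(crd.Status.StoredVersions))
    for _, v := range crd.Status.StoredVersions {
        if v != oldVersion {
            kept = append(kept, v)
        }
    }
    if len(kept) == len(crd.Status.StoredVersions) {
        // The old version is already gone; nothing to do.
        return nil
    }

    crd.Status.StoredVersions = kept
    // storedVersions lives in the status subresource.
    return c.Status().Update(ctx, crd)
}

With the old version removed from storedVersions, dropping v1alpha2 from spec.versions in the kubewarden-crds chart should no longer be rejected by the API server.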

In case you want to set up a similar testing environment, these are the commands used to create a cluster with the old Kubewarden stack versions:

minikube delete --all && \
minikube start && \
helm install --wait --namespace cert-manager --create-namespace --set crds.enabled=true cert-manager jetstack/cert-manager && \
helm install --wait -n kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds --version 0.1.4 && \
helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller --version 0.4.6 && \
kubectl apply -f policy.yaml && \
kubectl get clusteradmissionpolicy privileged-pods -o yaml

The policy definition:

apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.2
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations:
    - CREATE
    - UPDATE
  mutating: false

Acceptance criteria

  • Change the controller to update the storedVersions field, removing the old version that is no longer in use.
  • Mark the v1alpha2 API package with //+kubebuilder:skip, which removes the version from the CRD generation (see the sketch after this list).
  • Add tests covering the reproduction steps above.
  • [idea] Document this behavior for future migrations.
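For reference, the marker is just a package-level comment consumed by controller-gen; the placement below is only an illustration of how the v1alpha2 API package could be annotated:

// Package v1alpha2 contains the deprecated API Schema definitions for the
// policies v1alpha2 API group. The +kubebuilder:skip marker below tells
// controller-gen to leave this version out of the generated CRD manifests.
// +kubebuilder:skip
package v1alpha2

Note that skipping the package only affects newly generated manifests; clusters upgraded from older releases still need the storedVersions cleanup described above before v1alpha2 can be removed from spec.versions.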

Originally posted by @jvanz in https://github.com/kubewarden/kubewarden-controller/pull/896#discussion_r1781522909

jvanz · Sep 30 '24 17:09