Remove `v1alpha2` CRDs
When adding the shortNames for our CRDs, we hit a problem when trying to remove the old `v1alpha2` CRD version. Even though the resources are migrated to `v1` during the upgrade path, the CRD status keeps the `v1alpha2` version in its `storedVersions` field, which blocks the upgrade that removes the old version. Therefore, we need to find the best way to fix this issue and safely remove the `v1alpha2` version.
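The leftover version can be inspected directly on the CRD. For example (a quick check, assuming `kubectl` access to the upgraded cluster and using the `ClusterAdmissionPolicy` CRD from the error shown below):

```console
kubectl get crd clusteradmissionpolicies.policies.kubewarden.io \
  -o jsonpath='{.status.storedVersions}{"\n"}'
# prints something like ["v1alpha2","v1"], even though all objects were migrated to v1
```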
To simulate the issue, follow these steps:
- Start a clean cluster
- Install kubewarden-crds version `0.1.4` and kubewarden-controller version `0.4.6`. These are the versions before the `v1` CRDs and the Kubewarden `v1.0.0` release
- Install a `v1alpha2` policy. See below for a policy that can be used
- Upgrade to `v1.0.0` of the crds and controller helm charts
- Upgrade to the latest crds and controller helm charts
- Change a local kubewarden-crds helm chart, removing the `v1alpha2` version
- Try to upgrade the crds using my local helm chart
The following error happens:
Error: UPGRADE FAILED: cannot patch "clusteradmissionpolicies.policies.kubewarden.io" with kind CustomResourceDefinition: CustomResourceDefinition.apiextensions.k8s.io "clusteradmissionpolicies.policies.kubewarden.io" is invalid: status.storedVersions[0]: Invalid value: "v1alpha2": must appear in spec.versions
Therefore, even though the policies have been migrated to `v1` during the upgrade path, the `storedVersions` field still reports that we have `v1alpha2` installed. This is the field description:
storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from `spec.versions` while they exist in this list.
Considering this documentation, I guess our controller needs to update this field to allow the removal of the old CRD version.
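For reference, this is roughly what such an update looks like when done by hand. This is only a sketch of the idea, not the proposed controller change; it assumes a `kubectl` version that supports `--subresource`, and it must only happen after every object has really been re-stored as `v1`:

```console
kubectl patch crd clusteradmissionpolicies.policies.kubewarden.io \
  --subresource=status --type=merge \
  -p '{"status":{"storedVersions":["v1"]}}'
```

The same applies to any other Kubewarden CRD that still lists `v1alpha2` in its status.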
In case you want to set up a similar testing environment, these are the commands used to create a cluster with the old Kubewarden stack versions:
minikube delete --all && \
minikube start && \
helm install --wait --namespace cert-manager --create-namespace --set crds.enabled=true cert-manager jetstack/cert-manager && \
helm install --wait -n kubewarden --create-namespace kubewarden-crds kubewarden/kubewarden-crds --version 0.1.4 && \
helm install --wait -n kubewarden kubewarden-controller kubewarden/kubewarden-controller --version 0.4.6 && \
kubectl apply -f policy.yaml && \
kubectl get clusteradmissionpolicy privileged-pods -o yaml
The policy definition:
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.3.2
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations:
        - CREATE
        - UPDATE
  mutating: false
Acceptance criteria
- change the controller to update the `storedVersions` field, removing the old version that is no longer in use
- mark the `v1alpha2` API package with `//+kubebuilder:skip`. This will remove the version from the CRD generation
- add a test to cover the testing steps above (see the verification sketch below)
- [idea] document this behavior for future migrations
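A possible assertion for such a test, sketched under the assumption that it runs after the full upgrade path with the fixed controller: neither `spec.versions` nor `status.storedVersions` should mention the old version any more.

```console
kubectl get crd clusteradmissionpolicies.policies.kubewarden.io \
  -o jsonpath='{.spec.versions[*].name} {.status.storedVersions}{"\n"}'
# expected: only v1 entries, no v1alpha2 left anywhere
```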
Originally posted by @jvanz in https://github.com/kubewarden/kubewarden-controller/pull/896#discussion_r1781522909