Upgrade 3.7.0 -> 3.8.0 fails when a VolumeSnapshotClass exists
When I try to upgrade, I get:
# helm -n openebs upgrade openebs openebs/openebs --reuse-values --version 3.8.0
false
Error: UPGRADE FAILED: Unable to continue with update: CustomResourceDefinition "volumesnapshotclasses.snapshot.storage.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "openebs"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "openebs"
I do in fact have one VolumeSnapshotClass:
# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME                    DRIVER               DELETIONPOLICY   AGE
longhorn-snapshot-vsc   driver.longhorn.io   Delete           132d
# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io longhorn-snapshot-vsc -o yaml
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: driver.longhorn.io
kind: VolumeSnapshotClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"snapshot.storage.k8s.io/v1","deletionPolicy":"Delete","driver":"driver.longhorn.io","kind":"VolumeSnapshotClass","metadata":{"annotations":{},"labels":{"velero.io/csi-volumesnapshot-class":"true"},"name":"longhorn-snapshot-vsc"},"parameters":{"type":"snap"}}
  creationTimestamp: "2023-05-23T08:24:38Z"
  generation: 2
  labels:
    velero.io/csi-volumesnapshot-class: "true"
  managedFields:
  - apiVersion: snapshot.storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:deletionPolicy: {}
      f:driver: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:velero.io/csi-volumesnapshot-class: {}
      f:parameters:
        .: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-06-28T12:59:12Z"
  name: longhorn-snapshot-vsc
  resourceVersion: "105590151"
  uid: b4206632-3c4b-459f-a0ff-d5324b9a2b0f
parameters:
  type: snap
But this one belongs to Longhorn, not OpenEBS.
How can I get past this?
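For what it's worth, the error names exactly the ownership metadata Helm wants before it will adopt an existing object, and it is complaining about the CustomResourceDefinition itself, not about my Longhorn VolumeSnapshotClass. If letting Helm adopt the CRD into the openebs release is acceptable, an untested sketch would be to set those keys by hand:
# kubectl label crd volumesnapshotclasses.snapshot.storage.k8s.io app.kubernetes.io/managed-by=Helm
# kubectl annotate crd volumesnapshotclasses.snapshot.storage.k8s.io meta.helm.sh/release-name=openebs
# kubectl annotate crd volumesnapshotclasses.snapshot.storage.k8s.io meta.helm.sh/release-namespace=openebs
After that the same helm upgrade should no longer hit the ownership check, though whether Helm should own a CRD that Longhorn also relies on is a separate question.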
I'm running into the same problem when upgrading OpenEBS from 3.7.0 to 3.9.0: it complains that a volumesnapshotclass already exists, even though that one is in a different namespace and belongs to a different storage class.
Same issue with RKE2. Disabling the classes brings another set of errors, although the resulting OpenEBS does work:
- echo 'Installing helm_v3 chart'
- helm_v3 install --namespace openebs --set-string global.clusterCIDR=10.42.0.0/16 --set-string global.clusterCIDRv4=10.42.0.0/16 --set-string global.clusterDNS=10.43.0.10 --set-string global.clusterDomain=cluster.local --set-string global.rke2DataDir=/var/lib/rancher/rke2 --set-string global.serviceCIDR=10.43.0.0/16 openebs openebs/openebs --values /config/values-01_HelmChart.yaml
Error: INSTALLATION FAILED: 3 errors occurred:
- customresourcedefinitions.apiextensions.k8s.io "volumesnapshotcontents.snapshot.storage.k8s.io" already exists
- customresourcedefinitions.apiextensions.k8s.io "volumesnapshotclasses.snapshot.storage.k8s.io" already exists
- customresourcedefinitions.apiextensions.k8s.io "volumesnapshots.snapshot.storage.k8s.io" already exists
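To see what is clashing, it can help to check whether the snapshot CRDs already exist and whether anything claims Helm ownership of them; a quick check along these lines (sketch, adjust resource names as needed):
# kubectl get crd volumesnapshots.snapshot.storage.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io
# kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}'
On RKE2 these CRDs are typically owned by the bundled snapshot-controller rather than by the openebs release, which is why the install refuses to create them again.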
I'm seeing the same error when I deploy the lvm-operator like this:
kubectl create -f https://raw.githubusercontent.com/openebs/charts/gh-pages/versioned/3.10.0/lvm-operator.yaml
If I change create to apply I don't get the error, but I think that may just be masking the problem: create refuses to touch objects that already exist, while apply silently patches them in place.
EDIT: the issue with RKE2 is that it bundles the upstream snapshot-controller, which conflicts with the one OpenEBS installs. The bundled snapshot-controller can be disabled at install time, see: https://github.com/rancher/rke2-docs/issues/123
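For anyone else hitting this, the bundled charts can be turned off in the RKE2 config file. The component names below are my understanding of the current chart names and should be verified against the linked issue and your RKE2 version (sketch):
# cat /etc/rancher/rke2/config.yaml
disable:
  - rke2-snapshot-controller
  - rke2-snapshot-controller-crd
  - rke2-snapshot-validation-webhook
With those disabled, the OpenEBS chart can install its own copies of the snapshot CRDs without the "already exists" conflicts.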