cluster-api-addon-provider-helm
Unable to change namespace
What steps did you take and what happened: Deploy a HelmChartProxy, then change its `namespace` value. The release gets uninstalled from the current namespace, but it never gets reinstalled into the new namespace.
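For reference, a minimal sketch of the resource involved (field names assumed from the CAAPH `v1alpha1` API; the Argo CD chart details are illustrative, not my exact config):

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: argo-cd
spec:
  clusterSelector:
    matchLabels:
      argoCD: enabled            # illustrative selector
  repoURL: https://argoproj.github.io/argo-helm
  chartName: argo-cd
  namespace: default             # changing this (e.g. to "argocd") triggers the error below
```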
I found this in the logs:
E0319 23:30:02.231917 1 controller.go:329] "Reconciler error" err="Unable to continue with install: CustomResourceDefinition \"applications.argoproj.io\" in namespace \"\" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key \"meta.helm.sh/release-name\" must equal \"argo-cd-1710890996\": current value is \"argo-cd-1710874729\"; annotation validation error: key \"meta.helm.sh/release-namespace\" must equal \"argocd\": current value is \"default\"" controller="helmreleaseproxy" controllerGroup="addons.cluster.x-k8s.io" controllerKind="HelmReleaseProxy" HelmReleaseProxy="default/argo-cd-argoclustercaaph-wwfjf" namespace="default" name="argo-cd-argoclustercaaph-wwfjf" reconcileID="894ea010-a219-450a-9f76-eb4741c8de97"
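As far as I can tell, Helm 3 refuses to adopt a resource whose ownership annotations point at a different release. The `applications.argoproj.io` CRD left behind by the first install still carries that release's metadata, roughly as below (values copied from the error message above):

```yaml
# Ownership metadata Helm stamped on the CRD during the first install;
# the reinstall into the new namespace expects different values and aborts.
metadata:
  annotations:
    meta.helm.sh/release-name: argo-cd-1710874729
    meta.helm.sh/release-namespace: default
```

The numeric suffixes look like Unix timestamps, so each install attempt appears to get a freshly generated release name, which guarantees the annotation check fails.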
What did you expect to happen: At a minimum, the error should make it clearer what is going on. Ideally, the controller would simply uninstall the release from the current namespace and install it into the new one.
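As a hypothetical manual workaround (assuming the leftover CRDs are the only conflicting resources), re-pointing the ownership annotations should let the new release adopt them, e.g. via `kubectl patch crd applications.argoproj.io --type merge --patch-file patch.yaml`:

```yaml
# patch.yaml -- hypothetical; values mirror what the error message expects
metadata:
  annotations:
    meta.helm.sh/release-name: argo-cd-1710890996    # name expected by the failed install
    meta.helm.sh/release-namespace: argocd           # the new target namespace
```

This only unblocks a single attempt, though; if release names are regenerated per install, the annotations would have to be patched again each time.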
Environment:
- Cluster API version: 1.14.0
- Cluster API Add-on Provider for Helm version: v0.1.1-alpha.1
- Kubernetes version (use `kubectl version`): docker-desktop, Kubernetes 1.29.1
/kind bug
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This still seems worthy of fixing IMHO.