`Helm delete` deletes only the helm entry but not the deployment.

Describe the bug
When trying to delete from the helm screen (using `ctrl+d`), it deletes only the release and not the pods/deployments etc.
To Reproduce
Steps to reproduce the behavior:
- Create a release using `helm`.
- Delete the release from the `helm` screen.
- Check pods/deployments; you'll see that all other components are still there (see the sketch below).
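A minimal repro sketch of the steps above; the chart, release, and namespace names are assumptions for illustration, and the k9s step itself is interactive:

```sh
# Install a sample chart (chart/release/namespace names are assumptions).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-redis bitnami/redis --namespace demo --create-namespace

# In k9s: open the helm screen (:helm), select "my-redis", press ctrl+d.

# The release record is gone...
helm list --namespace demo

# ...but the workload objects reportedly remain:
kubectl get all --namespace demo -l app.kubernetes.io/instance=my-redis
```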
Expected behavior
Delete all resources associated with the release I just deleted.
Versions (please complete the following information):
- OS: Ubuntu 20.04.2 LTS
- K9s Rev: v0.24.2
- K8s Rev: v1.16.15-gke.6000
@omersi Thank you for reporting this! This is a bit strange as deleting the chart should delete all related artifacts. Are you sure you don't have another deployment holding these resources? Just tried a sample pg chart and deploy/pods/sec/etc... are all getting deleted as expected. Please send more details if this is not the case. Tx!!
Getting the same behavior on my side.
I think it used to work as expected in the beginning, but now it deletes only the chart and not the k8s objects.
Same issue. K9s Rev: v0.24.15 K8s Rev: v1.20.7
Same issue here. A normal `helm delete` removes all pods, services, etc., but `ctrl+d` from k9s somehow removes the helm deployment while keeping all resources.
K9s Rev: v0.24.15 K8s Rev: v1.21.5
Helm: `version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}`
Does k9s interface with my local helm install, or does it come with its own packaged version of helm?
The k9s log does not contain entries relating to helm but is spammed with `E1007 09:53:26.272662 163 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: the server could not find the requested resource`.
@derailed I just started using k9s and am not sure what other information might be useful for debugging this. Please let me know if/how I can help and where to start.
+1. Maybe add a command to uninstall the helm package, not only delete the reference.
@derailed Note that I am seeing this after deleting via the helm CLI, outside of the context of k9s. This GitHub issue came up while I was poking around, so I thought I'd post here.
`helm version`: `version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.18.1"}`
Azure Kubernetes Cluster running 1.22.6.
@derailed I'd like to report this issue too. I guess this issue should be reopened.
Thank you all for piping in!! I am not able to repro this using the latest k9s v0.26.0. I installed a redis chart, deleted it, and all associated resources were uninstalled as expected. Please add more details here so we can find a good repro. Thank you!
Hi there! Do you have `"helm.sh/resource-policy": keep` in your helm charts?
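If you're not sure, one way to check is to grep the rendered manifests of the release for that annotation; the release and namespace names below are placeholders:

```sh
# RELEASE and NAMESPACE are placeholders for your own release/namespace.
helm get manifest RELEASE --namespace NAMESPACE | grep -n "helm.sh/resource-policy"
```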
I can confirm that in my case it was the resource policy issue.
@mike-code thank you for reporting back! I think we can close this issue, but maybe add this information to the README or somewhere? @derailed what do you think?
It is not resolved for me yet.
I don't have `"helm.sh/resource-policy": keep`.
When I do a `helm uninstall --namespace kube-system release-name` (helm 3) from the console, all resources get wiped.
When I use k9s delete, the helm-managed resources stay.
I use this command to check:
`kubectl get all --namespace kube-system -l app.kubernetes.io/managed-by=Helm`
I observed this behaviour with different charts. One of them is the sentry helm chart.
I am using k9s v0.26.3
Maybe this has something to do with namespaces?
I think this one could be related to #1558. If your default namespace is different from the selected namespace where you are deleting the helm release, the release is deleted but not the resources; at least not in the namespace you selected, but in the default one. It seems like k9s does not handle the selected namespace correctly on helm release deletion.
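One way to check whether you are hitting this namespace mismatch after a k9s delete; the release name below is a placeholder, and the instance label assumes the chart follows the common `app.kubernetes.io/instance` convention:

```sh
# Which releases does Helm still know about, and in which namespaces?
helm list --all-namespaces

# Are there leftover resources for the release anywhere in the cluster?
# RELEASE is a placeholder for your release name.
kubectl get all --all-namespaces -l app.kubernetes.io/instance=RELEASE
```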
I have the same issue.
I fixed it by building the exact same deployment again and then uninstalling it with helm instead of deleting it from k9s.
The same issue. k9s v0.27.4 K8s v1.24.11
Same.
+1
Still not working. K9s Rev: v0.27.4 K8s Rev: v1.26.7
We are facing the same issue. Has someone already implemented a solution in a PR that could be referenced / tracked in this issue?
Pretty please, get this fixed!
Setting the namespace explicitly for the KubeClient seems to fix this issue (PR). Not sure if this behaviour is intended by the helm package, so I also opened an issue over there.
@derailed FYI, it seems like this bug still persists when the current context selected in ~/.kube/config differs from the one selected within k9s.
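A quick way to see one side of that mismatch; nothing here is k9s-specific, it only shows what the kubeconfig currently points at, to compare against what k9s displays:

```sh
# Context and default namespace according to the kubeconfig.
kubectl config current-context
kubectl config view --minify --output 'jsonpath={..namespace}'

# Compare with the context/namespace shown in the k9s header
# before pressing ctrl+d on the release.
```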