[BUG] Namespace pending on terminating
Describe the bug
A user can delete a namespace that still has resources (VMs/volumes/images) under it, which triggers the VMs to be deleted automatically. If some of those resources cannot be removed automatically, the namespace gets stuck in the Terminating
state and cannot be recovered.
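When this happens, the stuck namespace can be inspected from the CLI. A minimal sketch, assuming kubectl access to the cluster (the sample output is illustrative):

```sh
# The namespace stays in Terminating instead of disappearing
kubectl get namespace harvester-public
# NAME               STATUS        AGE
# harvester-public   Terminating   12m

# The namespace status conditions report what is still blocking deletion
kubectl get namespace harvester-public \
  -o jsonpath='{range .status.conditions[*]}{.type}{": "}{.message}{"\n"}{end}'
```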
To Reproduce
Steps to reproduce the behavior:
- Install Harvester with any number of nodes
- Create an image in namespace `harvester-public`
- Create a VM `vm1` in namespace `harvester-public`
- Create a VM `vm2` in namespace `default` which uses the image under namespace `harvester-public`
- Delete the namespace `harvester-public` (a CLI equivalent of this step is sketched below)
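Creating the image and VMs is easiest through the dashboard, so only the deletion step is sketched here, assuming kubectl access to the cluster:

```sh
# Delete the namespace; Kubernetes cascades deletion to everything inside it
kubectl delete namespace harvester-public --wait=false

# Enumerate whatever is left behind -- any remaining object with finalizers
# will keep the namespace stuck in Terminating
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --ignore-not-found --show-kind -n harvester-public
```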
Expected behavior
It is dangerous to remove resources automatically: after an accidental deletion, the deleted volumes cannot be recovered. So I would expect the deletion to be blocked, with a notification telling users how many resources they need to handle themselves.
Environment:
- Harvester ISO version: v1.0.3-rc1
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): Baremetal DL360, 4 nodes
Additional context
https://user-images.githubusercontent.com/5169694/181486566-e4ff43ac-b046-499d-92be-fbc226ac8f0b.mp4
Deleting a namespace automatically removing all the resources in that namespace is expected behavior in Kubernetes; we may just add a more proper description to the pop-up dialog, e.g. change the wording to:
Deleting the namespace will remove all the resources in this particular namespace and this is not revertible. Are you sure you want to continue deleting the namespace xxx?

But a namespace pending on Terminating is a different case: users should be able to delete their own namespaces correctly. Deletion sometimes gets stuck on finalizers and has to be checked case by case, but it would be good to surface some useful information, or at least to test that deleting a namespace which only contains regular resources such as VMs and volumes always succeeds.
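For the case-by-case finalizer debugging mentioned above, a minimal sketch; the resource kind and object name are hypothetical, and clearing finalizers by hand skips the controller's cleanup, so it is a last resort:

```sh
# See which finalizers are pinning a leftover object (object name hypothetical)
kubectl get virtualmachineimages.harvesterhci.io image-abc \
  -n harvester-public -o jsonpath='{.metadata.finalizers}'

# Last resort: drop the finalizers so the namespace can finish terminating.
# This bypasses the owning controller's cleanup and may orphan backing storage.
kubectl patch virtualmachineimages.harvesterhci.io image-abc \
  -n harvester-public --type=merge -p '{"metadata":{"finalizers":[]}}'
```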
Pre Ready-For-Testing Checklist
- [ ] ~~If labeled: require/HEP Has the Harvester Enhancement Proposal PR been submitted? The HEP PR is at:~~
- [X] Where are the reproduce steps/test steps documented? The reproduce steps/test steps are at: https://github.com/harvester/dashboard/pull/399
- [ ] ~~Is there a workaround for the issue? If so, where is it documented? The workaround is at:~~
- [ ] ~~Has the backend code been merged (harvester, harvester-installer, etc.) (including backport-needed/*)? The PR is at:~~
- [ ] Does the PR include the explanation for the fix or the feature?
- [ ] Does the PR include a deployment change (YAML/Chart)? If so, where are the PRs for both the YAML file and the Chart?
  - The PR for the YAML change is at:
  - The PR for the chart change is at:
- [X] If labeled: area/ui Has the UI issue been filed or is it ready to be merged? The UI issue/PR is at: https://github.com/harvester/dashboard/pull/399
- [ ] ~~If labeled: require/doc, require/knowledge-base Has the necessary document PR been submitted or merged? The documentation/KB PR is at:~~
- [ ] If NOT labeled: not-require/test-plan Has the e2e test plan been merged? Have QAs agreed on the automation test case? If there is only a test case skeleton w/o implementation, have you created an implementation issue?
  - The automation skeleton PR is at:
  - The automation test case PR is at:
- [ ] If the fix introduces code for backward compatibility, has a separate issue been filed with the label release/obsolete-compatibility? The compatibility issue is filed at:
Automation e2e test issue: harvester/tests#436
Verified that this bug has been resolved; a warning message has been added.
Test Information
- Environment: qemu/KVM single node
- Harvester Version: master-9631b136-head
- ui-source Option: Auto
Verify Steps:
- Install Harvester with any number of nodes
- Log in to the dashboard and navigate to Namespaces
- Try to delete any namespace; the prompt window should show the warning message