Normalizing volume consolidation in live migration on KVM
Description
Currently, when a volume created as a linked clone is live migrated from one NFS storage to another on KVM, it keeps the template as a backing file on the destination storage. In every other case, such as NFS to SharedMountPoint, the volume is consolidated with its backing file during the migration.
This special case adds unnecessary complexity to ACS; in most cases, skipping the consolidation does not significantly optimize resource usage. On the other hand, when the volume is consolidated with its backing file, the hypervisor detects empty sectors and removes them from the final volume, so consolidation can actually reduce storage usage.
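To illustrate that effect, the sketch below (not part of this PR; the paths and class name are hypothetical) shells out to `qemu-img convert`, which flattens the backing chain and, by default, skips sectors that read as zero, producing a standalone, zero-stripped copy:

```java
import java.io.IOException;

public class ConsolidateExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        // qemu-img convert flattens the backing chain (no -B option given) and,
        // by default, does not copy sectors that read as zero, so the output is
        // a standalone qcow2 volume without the template as a backing file.
        Process p = new ProcessBuilder(
                "qemu-img", "convert", "-O", "qcow2",
                "/mnt/src-nfs/linked-clone.qcow2",  // hypothetical delta volume backed by a template
                "/mnt/dst-nfs/consolidated.qcow2")  // hypothetical standalone copy
                .inheritIO()
                .start();
        System.exit(p.waitFor());
    }
}
```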
This PR fixes the problem explained in #7615 and #8834, but without corrupting the volume.
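For context on the mechanism: libvirt exposes two disk-copy modes for live migration. `VIR_MIGRATE_NON_SHARED_DISK` copies the whole disk, consolidating the linked clone with its backing file on the destination, while `VIR_MIGRATE_NON_SHARED_INC` copies only the top layer and expects the backing file to already exist on the destination. Below is a minimal sketch, assuming the libvirt Java bindings; the flag values come from libvirt's `virDomainMigrateFlags`, and `migrateConsolidating` is a hypothetical helper, not this PR's actual diff:

```java
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class MigrateSketch {
    // Flag values as defined by libvirt's virDomainMigrateFlags.
    private static final long VIR_MIGRATE_LIVE = 1L;
    private static final long VIR_MIGRATE_NON_SHARED_DISK = 64L;  // full disk copy: destination volume is consolidated
    private static final long VIR_MIGRATE_NON_SHARED_INC = 128L;  // incremental copy: destination keeps the backing file

    // Hypothetical helper: request a full disk copy so the volume is
    // consolidated with its backing file regardless of the storage type.
    static Domain migrateConsolidating(Domain vm, Connect destination, String destinationUri)
            throws LibvirtException {
        long flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_NON_SHARED_DISK;
        return vm.migrate(destination, flags, null, destinationUri, 0);
    }
}
```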
Types of changes
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
- [ ] build/CI
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
- [ ] Major
- [X] Minor
Bug Severity
- [ ] BLOCKER
- [ ] Critical
- [ ] Major
- [ ] Minor
- [ ] Trivial
Screenshots (if appropriate):
How Has This Been Tested?
Before applying the changes, a VM was created using linked clone on an NFS storage. When migrating it to another NFS storage, the template was copied to the destination storage and continued to be used by the VM as a backing file.
After applying the changes, a new VM was created under the same conditions as the old one (except for using a different template). When migrating the VM from one NFS storage to another, the template was not copied to the new storage and the VM volume was consolidated.
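One way to verify the result (illustrative only; the volume path is hypothetical) is to check whether `qemu-img info` still reports a backing file on the destination volume; after consolidation it should not:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class BackingFileCheck {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "qemu-img info" prints a "backing file: ..." line only when the image
        // still has a backing file; its absence means the volume was consolidated.
        Process p = new ProcessBuilder("qemu-img", "info", "/mnt/dst-nfs/volume.qcow2") // hypothetical path
                .redirectErrorStream(true)
                .start();
        boolean hasBackingFile = false;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("backing file:")) {
                    hasBackingFile = true;
                }
            }
        }
        p.waitFor();
        System.out.println(hasBackingFile ? "still a linked clone" : "consolidated");
    }
}
```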
Codecov Report
Attention: Patch coverage is 0% with 35 lines in your changes missing coverage. Please review.
Project coverage is 15.28%. Comparing base (f0ba905) to head (f83c449). Report is 65 commits behind head on 4.19.
Files with missing lines | Patch % | Lines |
---|---|---|
...torage/motion/StorageSystemDataMotionStrategy.java | 0.00% | 33 Missing :warning: |
...cloud/hypervisor/kvm/resource/MigrateKVMAsync.java | 0.00% | 2 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## 4.19 #8911 +/- ##
============================================
+ Coverage 15.07% 15.28% +0.20%
- Complexity 11168 11580 +412
============================================
Files 5406 5406
Lines 472795 490142 +17347
Branches 57834 66027 +8193
============================================
+ Hits 71282 74904 +3622
- Misses 393585 406937 +13352
- Partials 7928 8301 +373
Flag | Coverage Δ |
---|---|
uitests | 5.10% <ø> (+0.79%) :arrow_up: |
unittests | 15.94% <0.00%> (+0.14%) :arrow_up: |
Flags with carried forward coverage won't be shown.
@blueorangutan package
@sureshanaparti a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 9252
Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 9256
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9262
@sureshanaparti could we run the CI here?
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests
[SF] Trillian Build Failed (tid-9870)
[SF] Trillian Build Failed (tid-9873)
[SF] Trillian test result (tid-9884)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 45095 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr8911-t9884-kvm-centos7.zip
Smoke tests completed. 127 look OK, 2 have errors, 0 did not run
Only failed and skipped tests results shown below:
Test | Result | Time (s) | Test File |
---|---|---|---|
test_01_events_resource | Error | 296.55 | test_events_resource.py |
test_01_events_resource | Error | 296.56 | test_events_resource.py |
test_04_deploy_vm_for_other_user_and_test_vm_operations | Failure | 85.55 | test_network_permissions.py |
ContextSuite context=TestNetworkPermissions>:teardown | Error | 1.39 | test_network_permissions.py |
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
@blueorangutan package
@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 9536
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el7 ✖️ el8 ✖️ el9 ✔️ debian ✖️ suse15. SL-JID 9547
Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9557
@DaanHoogland @sureshanaparti could we run the CI here?
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests
[SF] Trillian Build Failed (tid-10207)
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 9640
@DaanHoogland @sureshanaparti could we try again?