cloudstack
[Veeam] enable volume attach/detach in VMs with Backup Offerings
Description
Using the VMware hypervisor with the Veeam plugin active, it is not possible to attach or detach volumes on VMs assigned to a Backup Offering. This PR exposes the global configuration backup.enable.attach.detach.of.volumes
to allow operators to define whether it is possible to attach/detach volumes on VMs assigned to a Backup Offering.
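For illustration only, here is a minimal sketch of the kind of gate such a setting implies. The class, method, and field names below are hypothetical (this is not the PR's actual code); only the setting name backup.enable.attach.detach.of.volumes comes from the change itself.

```java
// Hypothetical sketch; in CloudStack the real check would sit in the volume
// attach/detach path and read the setting through the configuration framework.
public class VolumeAttachGuard {

    // Assumed to mirror the global setting backup.enable.attach.detach.of.volumes.
    private final boolean allowAttachDetachWithBackupOffering;

    public VolumeAttachGuard(boolean allowAttachDetachWithBackupOffering) {
        this.allowAttachDetachWithBackupOffering = allowAttachDetachWithBackupOffering;
    }

    /**
     * Throws if the VM is assigned to a Backup Offering and the operator has not
     * enabled attach/detach for such VMs.
     */
    public void checkVolumeAttachDetach(boolean vmHasBackupOffering) {
        if (vmHasBackupOffering && !allowAttachDetachWithBackupOffering) {
            throw new IllegalStateException(
                "Volume attach/detach is disabled for VMs assigned to a Backup Offering; "
                + "set backup.enable.attach.detach.of.volumes to true to allow it.");
        }
    }
}
```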
Types of changes
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
- [ ] Major
- [x] Minor
How Has This Been Tested?
It was tested in a local lab:
- I created a new VM and assigned it to a Backup Offering;
- I tried to attach a new volume to this VM;
- Before this change, an exception was thrown;
- Now, with the global configuration backup.enable.attach.detach.of.volumes set to true, it is possible to attach/detach volumes on VMs assigned to a Backup Offering.

I also added new unit tests; an illustrative sketch of such a test is shown below.
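As an illustration of the kind of unit test mentioned above, here is a JUnit-style sketch written against the hypothetical VolumeAttachGuard from the earlier example, not against the PR's real service classes.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Illustrative tests for the hypothetical VolumeAttachGuard sketched earlier;
// the PR's actual unit tests target CloudStack's own service classes.
class VolumeAttachGuardTest {

    @Test
    void attachIsRejectedWhenSettingIsDisabled() {
        VolumeAttachGuard guard = new VolumeAttachGuard(false);
        assertThrows(IllegalStateException.class, () -> guard.checkVolumeAttachDetach(true));
    }

    @Test
    void attachIsAllowedWhenSettingIsEnabled() {
        VolumeAttachGuard guard = new VolumeAttachGuard(true);
        assertDoesNotThrow(() -> guard.checkVolumeAttachDetach(true));
    }
}
```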
Found UI changes, kicking a new UI QA build @blueorangutan ui
@acs-robot a Jenkins job has been kicked to build UI QA env. I'll keep you posted as I make progress.
UI build: :heavy_check_mark: Live QA URL: http://qa.cloudstack.cloud:8080/client/pr/6581 (SL-JID-2023)
@SadiJr this sounds like a good enhancement. Did you test the backup functionality on VMs that had volumes attached/detached? Most particularly: what if a volume is detached and, after that, a restore is done to a point where it was still attached; does this work?
@DaanHoogland I tested making a backup of one VM, then attaching a new volume and making the backup again, to make sure the newly attached volume appears in Veeam. I did the same test but detaching the volume and making another backup. The full restore, in my tests, works well. The volume restore, for reasons I don't know, restores the entire VM under another name and then detaches the selected volume and attaches it to the target VM; even though this process sounds wrong to me (as Veeam has a function to restore only a specific volume to a specific VM), in my tests it worked correctly. But I would be particularly happy if someone else could also test it to ensure consistency.
Found UI changes, kicking a new UI QA build @blueorangutan ui
@acs-robot a Jenkins job has been kicked to build UI QA env. I'll keep you posted as I make progress.
UI build: :heavy_check_mark: Live QA URL: http://qa.cloudstack.cloud:8080/client/pr/6581 (SL-JID-2071)
Unfortunately I don't have a setup with Veeam, so my tests won't help to assess the code/functionality. @blueorangutan package
@DaanHoogland a Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result: :heavy_check_mark: el7 :heavy_check_mark: el8 :heavy_check_mark: debian :heavy_check_mark: suse15. SL-JID 3901
@blueorangutan test centos7 vmware-67u3
@DaanHoogland a Trillian-Jenkins test job (centos7 mgmt + vmware-67u3) has been kicked to run smoke tests
Trillian test result (tid-4619) Environment: vmware-67u3 (x2), Advanced Networking with Mgmt server 7 Total time taken: 43275 seconds Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr6581-t4619-vmware-67u3.zip Smoke tests completed. 100 look OK, 1 have errors Only failed tests results shown below:
Test | Result | Time (s) | Test File |
---|---|---|---|
test_01_invalid_upgrade_kubernetes_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_02_upgrade_kubernetes_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_03_deploy_and_scale_kubernetes_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_04_autoscale_kubernetes_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_05_basic_lifecycle_kubernetes_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_06_delete_kubernetes_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_07_deploy_kubernetes_ha_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_08_upgrade_kubernetes_ha_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
test_09_delete_kubernetes_ha_cluster | Failure | 0.01 | test_kubernetes_clusters.py |
@shwstppr should we consider these errors in assessing this PR?
@DaanHoogland checked the logs. Seems like some intermittent issue with the env. We may run tests again
@blueorangutan test centos7 vmware-67u3
@DaanHoogland a Trillian-Jenkins test job (centos7 mgmt + vmware-67u3) has been kicked to run smoke tests
Trillian test result (tid-4626) Environment: vmware-67u3 (x2), Advanced Networking with Mgmt server 7 Total time taken: 42221 seconds Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr6581-t4626-vmware-67u3.zip Smoke tests completed. 101 look OK, 0 have errors Only failed tests results shown below:
Test | Result | Time (s) | Test File |
---|---|---|---|
@GutoVeronezi are your concerns with this met (a.k.a. can you approve)?
Sorry for the delay, @DaanHoogland. Just reviewed again.
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
@SadiJr can you look at the comments and the conflicts?
Codecov Report
Merging #6581 (3a34702) into main (5665782) will increase coverage by 0.00%. The diff coverage is 15.06%.
@@ Coverage Diff @@
## main #6581 +/- ##
=========================================
Coverage 11.76% 11.77%
- Complexity 7661 7669 +8
=========================================
Files 2503 2503
Lines 245958 246014 +56
Branches 38374 38386 +12
=========================================
+ Hits 28946 28974 +28
- Misses 213248 213273 +25
- Partials 3764 3767 +3
Impacted Files | Coverage Δ |
---|---|
.../main/java/com/cloud/capacity/CapacityManager.java | 100.00% <ø> (ø) |
...java/com/cloud/agent/manager/AgentManagerImpl.java | 4.94% <0.00%> (ø) |
...cloud/agent/manager/ClusteredAgentManagerImpl.java | 0.00% <0.00%> (ø) |
...n/java/com/cloud/vm/VirtualMachineManagerImpl.java | 5.89% <0.00%> (ø) |
...ine/cloud/entity/api/VirtualMachineEntityImpl.java | 0.00% <ø> (ø) |
...tack/engine/orchestration/NetworkOrchestrator.java | 6.11% <ø> (ø) |
...stack/engine/orchestration/VolumeOrchestrator.java | 0.00% <ø> (ø) |
...java/com/cloud/ovm/hypervisor/OvmResourceBase.java | 0.00% <0.00%> (ø) |
...rvisor/ovm3/resources/helpers/Ovm3StoragePool.java | 0.00% <ø> (ø) |
...a/org/apache/cloudstack/api/ListUcsManagerCmd.java | 0.00% <0.00%> (ø) |
... and 12 more |
@SadiJr will you consider the comments as well?
@GutoVeronezi done, thanks for the suggestions. @DaanHoogland done too, thanks for the advice.
Seeing the test_affinity_groups_projects failure on several PRs, not specifically this one. See #7098
@blueorangutan package