[KVM] CPU Features for System VMs
Description
Currently, when defining the CPU configuration of VMs with KVM, the Apache CloudStack Agent executes the following workflow:
https://github.com/apache/cloudstack/blob/41b4f0afd5321e987973b615b566365e48228c6e/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java#L2980-L2993
As can be noticed, the CPU features are only considered for end-user VMs; they are completely ignored for system VMs. This can lead to inconsistencies and errors when deploying system VMs. For instance, when CPU flags must be disabled for a given CPU model because the host CPU does not support them, Libvirt returns an error similar to the following when system VMs are deployed:
Error while deploying VM. org.libvirt.LibvirtException: the CPU is incompatible with host CPU: Host CPU does not provide required features: hle, rtm, avx512-bf16, taa-no
~~Therefore, this PR proposes to add a new property, called systemvm.guest.cpu.features, to define CPU features for system VMs.~~
(Edit) Therefore, this PR proposes to consider the CPU features defined in the guest.cpu.features property when provisioning system VMs.
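For illustration, the gist of the change is to stop gating the CPU feature list on the VM type. Below is a minimal standalone sketch of that logic; the class, enum, and method names are hypothetical stand-ins, not CloudStack's actual API (the real code lives in LibvirtComputingResource):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the agent's CPU definition logic; illustrative only.
public class CpuFeaturesSketch {

    enum VmType { USER, SYSTEM }

    // Returns the CPU feature tokens to attach to the domain definition.
    static List<String> featuresFor(VmType type, String guestCpuFeatures) {
        if (guestCpuFeatures == null || guestCpuFeatures.isBlank()) {
            return List.of();
        }
        // Previous behavior (sketched): features were attached only for user VMs,
        // so system VMs inherited the bare CPU model and could hit the
        // "Host CPU does not provide required features" error above.
        // if (type != VmType.USER) { return List.of(); }

        // Proposed behavior: apply guest.cpu.features to system VMs as well.
        return Arrays.asList(guestCpuFeatures.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        String features = "-vmx-exit-clear-bndcfgs -vmx-entry-load-bndcfgs -hle -rtm -mpx";
        System.out.println(featuresFor(VmType.SYSTEM, features));
        // [-vmx-exit-clear-bndcfgs, -vmx-entry-load-bndcfgs, -hle, -rtm, -mpx]
    }
}
```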
Types of changes
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [X] Enhancement (improves an existing feature and functionality)
- [ ] Cleanup (Code refactoring and cleanup, that may add test cases)
- [ ] build/CI
- [ ] test (unit or integration test code)
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
- [ ] Major
- [X] Minor
Screenshots (if appropriate):
How Has This Been Tested?
- Defined the following properties in the agent.properties of the KVM hosts (a sketch after the test output below shows how these feature tokens map to libvirt <feature> elements):
guest.cpu.mode=custom
guest.cpu.model=Skylake-Client-IBRS
guest.cpu.features=-vmx-exit-clear-bndcfgs -vmx-entry-load-bndcfgs -hle -rtm -mpx
- Restarted the Apache CloudStack Agent and verified that the deployment of system VMs was successfully accomplished.
virsh dumpxml --domain r-15-VM
<domain type='kvm' id='5'>
<name>r-15-VM</name>
<uuid>ff5816d4-4e10-4326-9d5a-778566b00770</uuid>
<description>Debian GNU/Linux 12 (64-bit)</description>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static'>1</vcpu>
<cputune>
<shares>334</shares>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Apache Software Foundation</entry>
<entry name='product'>CloudStack KVM Hypervisor</entry>
<entry name='serial'>ff5816d4-4e10-4326-9d5a-778566b00770</entry>
<entry name='uuid'>ff5816d4-4e10-4326-9d5a-778566b00770</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-8.2'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Skylake-Client-IBRS</model>
<topology sockets='1' dies='1' cores='1' threads='1'/>
<feature policy='disable' name='vmx-exit-clear-bndcfgs'/>
<feature policy='disable' name='vmx-entry-load-bndcfgs'/>
<feature policy='disable' name='hle'/>
<feature policy='disable' name='rtm'/>
<feature policy='disable' name='mpx'/>
<feature policy='require' name='hypervisor'/>
</cpu>
<clock offset='utc'>
<timer name='kvmclock'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/mnt/10d28cdf-71a7-33ad-802e-f4ec9042e4fd/3db54927-c05d-4af9-8619-ab7e0fe23733' index='2'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/mnt/10d28cdf-71a7-33ad-802e-f4ec9042e4fd/551def03-d35f-4f45-a584-6d0bc425c61c'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<serial>3db54927c05d4af98619</serial>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='02:01:00:cd:00:02'/>
<source bridge='brenp1s0-540'/>
<bandwidth>
<inbound average='25600' peak='25600'/>
<outbound average='25600' peak='25600'/>
</bandwidth>
<target dev='vnet11'/>
<model type='virtio'/>
<link state='up'/>
<alias name='net0'/>
<rom bar='off' file=''/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='0e:00:a9:fe:59:98'/>
<source bridge='cloud0'/>
<target dev='vnet12'/>
<model type='virtio'/>
<link state='up'/>
<alias name='net1'/>
<rom bar='off' file=''/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='1e:00:9c:00:00:0e'/>
<source bridge='cloudbr0'/>
<bandwidth>
<inbound average='25600' peak='25600'/>
<outbound average='25600' peak='25600'/>
</bandwidth>
<target dev='vnet13'/>
<model type='virtio'/>
<link state='up'/>
<alias name='net2'/>
<rom bar='off' file=''/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/4'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/4'>
<source path='/dev/pts/4'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/r-15-VM.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5902' autoport='yes' listen='192.168.122.200'>
<listen type='address' address='192.168.122.200'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<watchdog model='i6300esb' action='none'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</watchdog>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+0</label>
<imagelabel>+0:+0</imagelabel>
</seclabel>
</domain>
- Verified that the deployment of end user VMs was successfully accomplished.
virsh dumpxml --domain i-2-14-VM
<domain type='kvm' id='6'>
<name>i-2-14-VM</name>
<uuid>d7373e69-bfcd-4baa-bda5-e39ba0c1e122</uuid>
<description>Ubuntu 18.04 LTS</description>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static'>1</vcpu>
<cputune>
<shares>334</shares>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Apache Software Foundation</entry>
<entry name='product'>CloudStack KVM Hypervisor</entry>
<entry name='serial'>d7373e69-bfcd-4baa-bda5-e39ba0c1e122</entry>
<entry name='uuid'>d7373e69-bfcd-4baa-bda5-e39ba0c1e122</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-8.2'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Skylake-Client-IBRS</model>
<topology sockets='1' dies='1' cores='1' threads='1'/>
<feature policy='disable' name='vmx-exit-clear-bndcfgs'/>
<feature policy='disable' name='vmx-entry-load-bndcfgs'/>
<feature policy='disable' name='hle'/>
<feature policy='disable' name='rtm'/>
<feature policy='disable' name='mpx'/>
<feature policy='require' name='hypervisor'/>
</cpu>
<clock offset='utc'>
<timer name='kvmclock'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/mnt/10d28cdf-71a7-33ad-802e-f4ec9042e4fd/3aba3575-61b4-4247-8464-c08adafe9496' index='2'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/mnt/10d28cdf-71a7-33ad-802e-f4ec9042e4fd/e751bd6d-d4c0-486f-a61b-110876c1784d'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<serial>3aba357561b442478464</serial>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='02:01:00:cd:00:01'/>
<source bridge='brenp1s0-540'/>
<bandwidth>
<inbound average='25600' peak='25600'/>
<outbound average='25600' peak='25600'/>
</bandwidth>
<target dev='vnet14'/>
<model type='virtio'/>
<link state='up'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/7'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/7'>
<source path='/dev/pts/7'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/i-2-14-VM.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5904' autoport='yes' listen='192.168.122.200'>
<listen type='address' address='192.168.122.200'/>
</graphics>
<audio id='1' type='none'/>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<watchdog model='i6300esb' action='none'>
<alias name='watchdog0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</watchdog>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+0</label>
<imagelabel>+0:+0</imagelabel>
</seclabel>
</domain>
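For reference, here is a rough sketch of how a guest.cpu.features string like the one configured above ends up as the <feature> elements in the domain XML: a leading '-' yields policy='disable' (as seen in both dumps), and, by assumption here, an unprefixed token yields policy='require'. The helper below is illustrative only, not the agent's actual code:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Illustrative helper: renders guest.cpu.features tokens as libvirt <feature> elements.
public class CpuFeatureXml {

    static String toFeatureXml(String guestCpuFeatures) {
        return Arrays.stream(guestCpuFeatures.trim().split("\\s+"))
                .map(token -> token.startsWith("-")
                        // "-hle" -> <feature policy='disable' name='hle'/>
                        ? String.format("<feature policy='disable' name='%s'/>", token.substring(1))
                        // Assumption: unprefixed tokens are required features.
                        : String.format("<feature policy='require' name='%s'/>", token))
                .collect(Collectors.joining(System.lineSeparator()));
    }

    public static void main(String[] args) {
        System.out.println(toFeatureXml("-vmx-exit-clear-bndcfgs -vmx-entry-load-bndcfgs -hle -rtm -mpx"));
    }
}
```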
@blueorangutan package
@bernardodemarco a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Codecov Report
:x: Patch coverage is 88.88889% with 1 line in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 17.17%. Comparing base (86827f8) to head (739c4e7).
:warning: Report is 127 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...ervisor/kvm/resource/LibvirtComputingResource.java | 88.88% | 1 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #10964 +/- ##
=========================================
Coverage 17.17% 17.17%
- Complexity 14985 14987 +2
=========================================
Files 5869 5869
Lines 521590 521591 +1
Branches 63485 63481 -4
=========================================
+ Hits 89562 89566 +4
+ Misses 421962 421959 -3
Partials 10066 10066
| Flag | Coverage Δ | |
|---|---|---|
| uitests | 3.75% <ø> (ø) | |
| unittests | 18.15% <88.88%> (+<0.01%) | :arrow_up: |
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13638
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
@bernardodemarco, as per your description, would it not make sense to always apply guest.cpu.features to system VMs as well? What is the backwards incompatibility you fear? It feels to me like this is not user-facing and shouldn’t be an issue, only a fix.
What is the backwards incompatibility you fear?
@DaanHoogland, the backwards incompatibility lies in the fact that the guest.cpu.features property is currently used to define CPU flags only for end-user VMs. If we change it to also apply to system VMs, operators would lose the ability to set flags exclusively for end-user VMs.
@bernardodemarco, as per your description, would it not make sense to always apply guest.cpu.features to system VMs as well?
Yes, it would. I can update the PR tomorrow to reflect this. What are your thoughts?
[SF] Trillian test result (tid-13482)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 61395 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10964-t13482-kvm-ol8.zip
Smoke tests completed. 140 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:
| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_01_redundant_vpc_site2site_vpn | Failure | 483.00 | test_vpc_vpn.py |
What is the backwards incompatibility you fear?
@DaanHoogland, the backwards incompatibility lies in the fact that the guest.cpu.features property is currently used to define CPU flags only for end-user VMs. If we change it to also apply to system VMs, operators would lose the ability to set flags exclusively for end-user VMs.
@bernardodemarco, as per your description, would it not make sense to always apply guest.cpu.features to system VMs as well?
Yes, it would. I can update the PR tomorrow to reflect this. What are your thoughts?
I do not know what would be wisdom here.
- is a different set of settings needed for VMs and systemVMs?
- would you ever want to not apply settings to systemVMs?
Intuitively, I’d just apply the user VM settings to system VMs as well.
Intuitively, I’d just apply the user VM settings to system VMs as well.
+1
Intuitively, I’d just apply the user VM settings to system VMs as well.
+1
Ok, nice. I'll change the PR ASAP to address that.
Ok, nice. I'll change the PR ASAP to address that.
@DaanHoogland, @weizhouapache, done!
@blueorangutan package
@bernardodemarco a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13770
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
[SF] Trillian test result (tid-13522)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 90080 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10964-t13522-kvm-ol8.zip
Smoke tests completed. 102 look OK, 39 have errors, 0 did not run
Only failed and skipped tests results shown below:
| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_nic_secondaryip_add_remove | Error | 22.89 | test_multipleips_per_nic.py |
| test_network_acl | Error | 2.38 | test_network_acl.py |
| test_01_verify_ipv6_network | Error | 3.17 | test_network_ipv6.py |
| test_01_verify_ipv6_network | Error | 3.17 | test_network_ipv6.py |
| test_03_network_operations_on_created_vm_of_otheruser | Error | 2.67 | test_network_permissions.py |
| test_03_network_operations_on_created_vm_of_otheruser | Error | 2.67 | test_network_permissions.py |
| test_04_deploy_vm_for_other_user_and_test_vm_operations | Failure | 1.54 | test_network_permissions.py |
| ContextSuite context=TestNetworkPermissions>:teardown | Error | 1.62 | test_network_permissions.py |
| test_delete_account | Error | 22.70 | test_network.py |
| test_delete_network_while_vm_on_it | Error | 2.50 | test_network.py |
| test_delete_network_while_vm_on_it | Error | 2.50 | test_network.py |
| test_deploy_vm_l2network | Error | 2.54 | test_network.py |
| test_deploy_vm_l2network | Error | 2.54 | test_network.py |
| test_l2network_restart | Error | 3.58 | test_network.py |
| test_l2network_restart | Error | 3.58 | test_network.py |
| ContextSuite context=TestL2Networks>:teardown | Error | 4.70 | test_network.py |
| ContextSuite context=TestPortForwarding>:setup | Error | 11.72 | test_network.py |
| ContextSuite context=TestPublicIP>:setup | Error | 12.69 | test_network.py |
| test_reboot_router | Error | 7.47 | test_network.py |
| test_releaseIP | Error | 7.03 | test_network.py |
| test_releaseIP_using_IP | Error | 7.41 | test_network.py |
| ContextSuite context=TestRouterRules>:setup | Error | 14.36 | test_network.py |
| test_01_deployVMInSharedNetwork | Failure | 1.34 | test_network.py |
| test_02_verifyRouterIpAfterNetworkRestart | Failure | 1.11 | test_network.py |
| test_03_destroySharedNetwork | Failure | 1.10 | test_network.py |
| ContextSuite context=TestSharedNetwork>:teardown | Error | 2.24 | test_network.py |
| test_01_deployVMInSharedNetwork | Failure | 1.46 | test_network.py |
| ContextSuite context=TestSharedNetworkWithConfigDrive>:teardown | Error | 2.56 | test_network.py |
| test_01_nic | Error | 56.23 | test_nic.py |
| test_01_non_strict_host_anti_affinity | Error | 3.62 | test_nonstrict_affinity_group.py |
| test_02_non_strict_host_affinity | Error | 2.59 | test_nonstrict_affinity_group.py |
| ContextSuite context=TestIsolatedNetworksPasswdServer>:setup | Error | 0.00 | test_password_server.py |
| test_01_isolated_persistent_network | Error | 0.22 | test_persistent_network.py |
| test_02_L2_persistent_network | Error | 1.25 | test_persistent_network.py |
| test_03_deploy_and_destroy_VM_and_verify_network_resources_persist | Failure | 2.50 | test_persistent_network.py |
| test_03_deploy_and_destroy_VM_and_verify_network_resources_persist | Error | 2.50 | test_persistent_network.py |
| ContextSuite context=TestL2PersistentNetworks>:teardown | Error | 2.56 | test_persistent_network.py |
| test_01_create_delete_portforwarding_fornonvpc | Error | 7.02 | test_portforwardingrules.py |
| test_01_add_primary_storage_disabled_host | Error | 0.28 | test_primary_storage.py |
| test_01_primary_storage_nfs | Error | 0.23 | test_primary_storage.py |
| ContextSuite context=TestStorageTags>:setup | Error | 0.40 | test_primary_storage.py |
| test_01_primary_storage_scope_change | Error | 0.11 | test_primary_storage_scope.py |
| test_01_vpc_privategw_acl | Failure | 7.84 | test_privategw_acl.py |
| test_02_vpc_privategw_static_routes | Failure | 7.49 | test_privategw_acl.py |
| test_03_vpc_privategw_restart_vpc_cleanup | Failure | 8.99 | test_privategw_acl.py |
| test_04_rvpc_privategw_static_routes | Failure | 7.79 | test_privategw_acl.py |
| test_09_project_suspend | Error | 2.56 | test_projects.py |
| test_10_project_activation | Error | 2.43 | test_projects.py |
| test_01_purge_expunged_api_vm_start_date | Error | 3.59 | test_purge_expunged_vms.py |
| test_02_purge_expunged_api_vm_end_date | Error | 3.12 | test_purge_expunged_vms.py |
| test_03_purge_expunged_api_vm_start_end_date | Error | 1.86 | test_purge_expunged_vms.py |
| test_04_purge_expunged_api_vm_no_date | Error | 2.03 | test_purge_expunged_vms.py |
| test_05_purge_expunged_vm_service_offering | Error | 1.47 | test_purge_expunged_vms.py |
| test_06_purge_expunged_vm_background_task | Error | 356.88 | test_purge_expunged_vms.py |
| test_CRUD_operations_userdata | Error | 1523.11 | test_register_userdata.py |
| test_deploy_vm_with_registered_userdata | Error | 7.93 | test_register_userdata.py |
| test_deploy_vm_with_registered_userdata_with_override_policy_allow | Error | 7.91 | test_register_userdata.py |
| test_deploy_vm_with_registered_userdata_with_override_policy_append | Error | 7.48 | test_register_userdata.py |
| test_deploy_vm_with_registered_userdata_with_override_policy_deny | Error | 7.93 | test_register_userdata.py |
| test_deploy_vm_with_registered_userdata_with_params | Error | 7.49 | test_register_userdata.py |
| test_link_and_unlink_userdata_to_template | Error | 8.57 | test_register_userdata.py |
| test_user_userdata_crud | Error | 7.75 | test_register_userdata.py |
| ContextSuite context=TestResetVmOnReboot>:setup | Error | 0.00 | test_reset_vm_on_reboot.py |
| ContextSuite context=TestRAMCPUResourceAccounting>:setup | Error | 0.00 | test_resource_accounting.py |
| ContextSuite context=TestResourceNames>:setup | Error | 0.00 | test_resource_names.py |
| ContextSuite context=TestRestoreVM>:setup | Error | 0.00 | test_restore_vm.py |
| ContextSuite context=TestRouterDHCPHosts>:setup | Error | 0.00 | test_router_dhcphosts.py |
| ContextSuite context=TestRouterDHCPOpts>:setup | Error | 0.00 | test_router_dhcphosts.py |
| ContextSuite context=TestRouterDns>:setup | Error | 0.00 | test_router_dns.py |
| ContextSuite context=TestRouterDnsService>:setup | Error | 0.00 | test_router_dnsservice.py |
| ContextSuite context=TestRouterIpTablesPolicies>:setup | Error | 0.00 | test_routers_iptables_default_policy.py |
| ContextSuite context=TestVPCIpTablesPolicies>:setup | Error | 0.00 | test_routers_iptables_default_policy.py |
| test_01_migrate_vm_strict_tags_success | Error | 0.27 | test_vm_strict_host_tags.py |
| test_02_migrate_vm_strict_tags_failure | Error | 0.27 | test_vm_strict_host_tags.py |
| test_01_restore_vm_strict_tags_success | Error | 0.29 | test_vm_strict_host_tags.py |
| test_02_restore_vm_strict_tags_failure | Error | 0.29 | test_vm_strict_host_tags.py |
| test_01_scale_vm_strict_tags_success | Error | 0.26 | test_vm_strict_host_tags.py |
| test_02_scale_vm_strict_tags_failure | Error | 0.28 | test_vm_strict_host_tags.py |
| test_01_deploy_vm_on_specific_host_without_strict_tags | Error | 0.27 | test_vm_strict_host_tags.py |
| test_02_deploy_vm_on_any_host_without_strict_tags | Error | 2.73 | test_vm_strict_host_tags.py |
| test_03_deploy_vm_on_specific_host_with_strict_tags_success | Error | 0.27 | test_vm_strict_host_tags.py |
| test_04_deploy_vm_on_any_host_with_strict_tags_success | Error | 5.91 | test_vm_strict_host_tags.py |
| test_05_deploy_vm_on_specific_host_with_strict_tags_failure | Failure | 0.30 | test_vm_strict_host_tags.py |
| ContextSuite context=TestIsolatedNetworks>:setup | Error | 0.00 | test_routers_network_ops.py |
| ContextSuite context=TestRedundantIsolateNetworks>:setup | Error | 0.00 | test_routers_network_ops.py |
| ContextSuite context=TestRouterServices>:setup | Error | 0.00 | test_routers.py |
| test_01_sys_vm_start | Failure | 0.10 | test_secondary_storage.py |
| ContextSuite context=TestCpuCapServiceOfferings>:setup | Error | 0.00 | test_service_offerings.py |
| ContextSuite context=TestServiceOfferings>:setup | Error | 0.32 | test_service_offerings.py |
| ContextSuite context=TestSetSourceNatIp>:setup | Error | 0.00 | test_set_sourcenat.py |
| ContextSuite context=TestSharedFSLifecycle>:setup | Error | 0.00 | test_sharedfs_lifecycle.py |
| ContextSuite context=TestSnapshotRootDisk>:setup | Error | 0.00 | test_snapshots.py |
| ContextSuite context=TestSnapshotStandaloneBackup>:setup | Error | 0.00 | test_snapshots.py |
| test_01_list_sec_storage_vm | Failure | 0.05 | test_ssvm.py |
| test_02_list_cpvm_vm | Failure | 0.04 | test_ssvm.py |
| test_03_ssvm_internals | Failure | 0.04 | test_ssvm.py |
| test_04_cpvm_internals | Failure | 0.04 | test_ssvm.py |
| test_05_stop_ssvm | Failure | 0.04 | test_ssvm.py |
| test_06_stop_cpvm | Failure | 0.04 | test_ssvm.py |
| test_07_reboot_ssvm | Failure | 0.04 | test_ssvm.py |
| test_08_reboot_cpvm | Failure | 0.04 | test_ssvm.py |
| test_09_reboot_ssvm_forced | Failure | 0.04 | test_ssvm.py |
| test_10_reboot_cpvm_forced | Failure | 0.04 | test_ssvm.py |
| test_11_destroy_ssvm | Failure | 0.04 | test_ssvm.py |
| test_12_destroy_cpvm | Failure | 0.04 | test_ssvm.py |
| ContextSuite context=TestVMWareStoragePolicies>:setup | Error | 0.00 | test_storage_policy.py |
| test_02_create_template_with_checksum_sha1 | Error | 65.69 | test_templates.py |
| test_03_create_template_with_checksum_sha256 | Error | 65.70 | test_templates.py |
| test_04_create_template_with_checksum_md5 | Error | 65.68 | test_templates.py |
| test_05_create_template_with_no_checksum | Error | 65.69 | test_templates.py |
| test_01_register_template_direct_download_flag | Error | 0.07 | test_templates.py |
| test_02_deploy_vm_from_direct_download_template | Error | 0.00 | test_templates.py |
| test_03_deploy_vm_wrong_checksum | Error | 0.06 | test_templates.py |
| ContextSuite context=TestTemplates>:setup | Error | 16.21 | test_templates.py |
| ContextSuite context=TestISOUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestLBRuleUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestNatRuleUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestPublicIPUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestSnapshotUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestVmUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestVolumeUsage>:setup | Error | 0.00 | test_usage.py |
| ContextSuite context=TestVpnUsage>:setup | Error | 0.00 | test_usage.py |
| test_01_scale_up_verify | Failure | 35.06 | test_vm_autoscaling.py |
| test_02_update_vmprofile_and_vmgroup | Failure | 245.82 | test_vm_autoscaling.py |
| test_03_scale_down_verify | Failure | 304.63 | test_vm_autoscaling.py |
| test_04_stop_remove_vm_in_vmgroup | Failure | 0.03 | test_vm_autoscaling.py |
| test_06_autoscaling_vmgroup_on_project_network | Failure | 43.63 | test_vm_autoscaling.py |
| test_06_autoscaling_vmgroup_on_project_network | Error | 43.63 | test_vm_autoscaling.py |
| test_07_autoscaling_vmgroup_on_vpc_network | Error | 1.24 | test_vm_autoscaling.py |
| ContextSuite context=TestVmAutoScaling>:teardown | Error | 10.44 | test_vm_autoscaling.py |
| test_01_deploy_vm_on_specific_host | Error | 0.10 | test_vm_deployment_planner.py |
| test_02_deploy_vm_on_specific_cluster | Error | 1.44 | test_vm_deployment_planner.py |
| test_03_deploy_vm_on_specific_pod | Error | 1.35 | test_vm_deployment_planner.py |
| test_04_deploy_vm_on_host_override_pod_and_cluster | Error | 0.14 | test_vm_deployment_planner.py |
| test_05_deploy_vm_on_cluster_override_pod | Error | 1.38 | test_vm_deployment_planner.py |
| test_01_migrate_VM_and_root_volume | Error | 100.06 | test_vm_life_cycle.py |
| test_02_migrate_VM_with_two_data_disks | Error | 56.19 | test_vm_life_cycle.py |
| test_01_secure_vm_migration | Error | 88.53 | test_vm_life_cycle.py |
| test_02_unsecure_vm_migration | Error | 227.35 | test_vm_life_cycle.py |
| test_04_nonsecured_to_secured_vm_migration | Error | 155.34 | test_vm_life_cycle.py |
| test_08_migrate_vm | Error | 0.07 | test_vm_life_cycle.py |
@DaanHoogland, thanks for running the integration tests!
I've taken a quick look at the errors and the Management Server logs. They seem to be related to environment issues:
2025-06-15 01:57:28,101 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) Searching all possible resources under this Zone: Zone {"id": "1", "name": "pr10964-t13522-kvm-ol8", "uuid": "cffaeffc-2a1d-4a2a-8ed6-27b88c41bbaf"}
2025-06-15 01:57:28,102 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) Listing clusters in order of aggregate capacity, that have (at least one host with) enough CPU and RAM capacity under this Zone: 1
2025-06-15 01:57:28,106 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) Removing from the clusterId list these clusters from avoid set: [1]
2025-06-15 01:57:28,107 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) No clusters found after removing disabled clusters and clusters in avoid list, returning.
2025-06-15 01:57:28,131 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) VM instance {"id":242,"instanceName":"i-283-242-VM","state":"Stopped","type":"User","uuid":"8c2148d3-1e96-4d2a-b78d-633911a13e98"} state transited from [Starting] to [Stopped] with event [OperationFailed]. VM's original host: null, new host: null, host before state transition: null
2025-06-15 01:57:28,158 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM instance {"id":242,"instanceName":"i-283-242-VM","state":"Starting","type":"User","uuid":"8c2148d3-1e96-4d2a-b78d-633911a13e98"}Scope=interface com.cloud.dc.DataCenter; id=1
2025-06-15 01:57:28,158 INFO [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668, ctx-ba951d3c]) (logid:b7fc88c8) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM instance {"id":242,"instanceName":"i-283-242-VM","state":"Starting","type":"User","uuid":"8c2148d3-1e96-4d2a-b78d-633911a13e98"}Scope=interface com.cloud.dc.DataCenter; id=1
2025-06-15 01:57:28,158 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668]) (logid:b7fc88c8) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 242, job origin: 2667
2025-06-15 01:57:28,158 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-7:[ctx-e8c695d7, job-2667/job-2668]) (logid:b7fc88c8) Unable to complete AsyncJob {"accountId":2,"cmd":"com.cloud.vm.VmWorkStart","cmdInfo":"rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAAA8nQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwcHA","cmdVersion":0,"completeMsid":null,"created":"Sun Jun 15 01:57:27 UTC 2025","id":2668,"initMsid":32989224371150,"instanceId":null,"instanceType":null,"lastPolled":null,"lastUpdated":null,"processStatus":0,"removed":null,"result":null,"resultCode":0,"status":"IN_PROGRESS","userId":2,"uuid":"90bb3d64-5c02-4776-9903-a603297a576d"}, job origin: 2667 com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM instance {"id":242,"instanceName":"i-283-242-VM","state":"Starting","type":"User","uuid":"8c2148d3-1e96-4d2a-b78d-633911a13e98"}Scope=interface com.cloud.dc.DataCenter; id=1
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1275)
at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:5582)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:102)
at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5706)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:99)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:689)
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:637)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Could we rerun the tests?
Could we rerun the tests?
Let’s first try the healthcheck PR.
@blueorangutan package
@JoaoJandre a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.
Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 14046
@blueorangutan test
@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests
[SF] Trillian test result (tid-13693)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 53659 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10964-t13693-kvm-ol8.zip
Smoke tests completed. 140 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:
| Test | Result | Time (s) | Test File |
|---|---|---|---|
| test_isolate_network_password_server | Failure | 11.08 | test_password_server.py |
This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
@blueorangutan package
@bernardodemarco a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.