Vm.__init__() got an unexpected keyword argument 'virtio_scsi_multi_queues'

claremont-awilson opened this issue 10 months ago • 3 comments

SUMMARY

Just getting started with the module; I get an error when trying to deploy a VM.

COMPONENT NAME

ovirt.ovirt.ovirt_vm

STEPS TO REPRODUCE

---
- hosts: localhost

  tasks:
    - name: Obtain SSO token using username/password creds
      ovirt.ovirt.ovirt_auth:
        url: https://myhost.mydomain.co.uk/ovirt-engine/api
        username: myUser@internal
        password: "myPassword!"
        insecure: true

    - ovirt.ovirt.ovirt_vm:
        auth: "{{ ovirt_auth }}"
        state: present
        cluster: myCluster
        name: myVm
        memory: 2GiB
        cpu_cores: 2
        cpu_sockets: 2
        cpu_shares: 1024
        type: server
        operating_system: rhel_8x64
        disks:
          - name: OEL8_7_base_OS
            bootable: True
        nics:
          - name: nic1

EXPECTED RESULTS

The VM is deployed.

ACTUAL RESULTS

ansible-playbook [core 2.14.2]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.11/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /bin/ansible-playbook
  python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4.3.0.1)] (/usr/bin/python3.11)
  jinja version = 3.1.2
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: deploy_vm8.yml ***********************************************************************************************
1 plays in repos/playbooks/deploy_vm8.yml

PLAY [localhost] *******************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
task path: /home/myUser/repos/playbooks/deploy_vm8.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357 `" && echo ansible-tmp-1691766318.6952455-13571-88994779928357="` echo /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357 `" ) && sleep 0'
Using module file /usr/lib/python3.11/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-135674g3g06k3/tmp93ttwb7v TO /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357/ /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.11 /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1691766318.6952455-13571-88994779928357/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]

TASK [Obtain SSO token using username/password creds] ******************************************************************
task path: /home/myUser/repos/playbooks/deploy_vm8.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812 `" && echo ansible-tmp-1691766319.3993425-13619-259090064851812="` echo /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_auth.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-135674g3g06k3/tmphp4m5rtq TO /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812/AnsiballZ_ovirt_auth.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812/ /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812/AnsiballZ_ovirt_auth.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.11 /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812/AnsiballZ_ovirt_auth.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1691766319.3993425-13619-259090064851812/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "ansible_facts": {
        "ovirt_auth": {
            "ca_file": null,
            "compress": true,
            "headers": null,
            "insecure": true,
            "kerberos": false,
            "timeout": 0,
            "token": "cgxlazFl1tqkdrMV-MI8OhtDpXRuF5YaaVDpZ7KTfDAcHZrnkpQw4qn7EvwzoVZgcKiCQhPu2Tp85sPAtq3sQQ",
            "url": "https://myhost.mydomain.co.uk/ovirt-engine/api"
        }
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "ca_file": null,
            "compress": true,
            "headers": null,
            "hostname": null,
            "insecure": true,
            "kerberos": false,
            "ovirt_auth": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "state": "present",
            "timeout": 0,
            "token": null,
            "url": "https://myhost.mydomain.co.uk/ovirt-engine/api",
            "username": "myUser@internal"
        }
    }
}

TASK [ovirt.ovirt.ovirt_vm] ********************************************************************************************
task path: /home/myUser/repos/playbooks/deploy_vm8.yml:14
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086 `" && echo ansible-tmp-1691766319.9438531-13639-106177033945086="` echo /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086 `" ) && sleep 0'
Using module file /root/.ansible/collections/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-135674g3g06k3/tmp9d46078g TO /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086/AnsiballZ_ovirt_vm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086/ /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086/AnsiballZ_ovirt_vm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3.11 /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086/AnsiballZ_ovirt_vm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1691766319.9438531-13639-106177033945086/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt.ovirt.ovirt_vm_payload_72w6opb8/ansible_ovirt.ovirt.ovirt_vm_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py", line 2694, in main
  File "/tmp/ansible_ovirt.ovirt.ovirt_vm_payload_72w6opb8/ansible_ovirt.ovirt.ovirt_vm_payload.zip/ansible_collections/ovirt/ovirt/plugins/module_utils/ovirt.py", line 673, in create
    self.build_entity(),
    ^^^^^^^^^^^^^^^^^^^
  File "/tmp/ansible_ovirt.ovirt.ovirt_vm_payload_72w6opb8/ansible_ovirt.ovirt.ovirt_vm_payload.zip/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_vm.py", line 1533, in build_entity
TypeError: Vm.__init__() got an unexpected keyword argument 'virtio_scsi_multi_queues'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "affinity_group_mappings": [],
            "affinity_label_mappings": [],
            "allow_partial_import": null,
            "ballooning_enabled": null,
            "bios_type": null,
            "boot_devices": null,
            "boot_menu": null,
            "cd_iso": null,
            "clone": false,
            "clone_permissions": false,
            "cloud_init": null,
            "cloud_init_nics": [],
            "cloud_init_persist": false,
            "cluster": "Claremont_Cloud_Heathrow",
            "cluster_mappings": [],
            "comment": null,
            "cpu_cores": 2,
            "cpu_mode": null,
            "cpu_pinning": null,
            "cpu_shares": 1024,
            "cpu_sockets": 2,
            "cpu_threads": null,
            "custom_compatibility_version": null,
            "custom_emulated_machine": null,
            "custom_properties": null,
            "delete_protected": null,
            "description": null,
            "disk_format": "cow",
            "disks": [
                {
                    "bootable": true,
                    "name": "OEL8_7_base_OS"
                }
            ],
            "domain_mappings": [],
            "exclusive": null,
            "export_domain": null,
            "export_ova": null,
            "fetch_nested": false,
            "force": false,
            "force_migrate": null,
            "graphical_console": null,
            "high_availability": null,
            "high_availability_priority": null,
            "host": null,
            "host_devices": null,
            "id": null,
            "initrd_path": null,
            "instance_type": null,
            "io_threads": null,
            "kernel_params": null,
            "kernel_params_persist": false,
            "kernel_path": null,
            "kvm": null,
            "lease": null,
            "lun_mappings": [],
            "memory": "2GiB",
            "memory_guaranteed": null,
            "memory_max": null,
            "migrate": null,
            "multi_queues_enabled": null,
            "name": "allytestvm",
            "nested_attributes": [],
            "next_run": null,
            "nics": [
                {
                    "name": "nic1"
                }
            ],
            "numa_nodes": [],
            "numa_tune_mode": null,
            "operating_system": "rhel_8x64",
            "placement_policy": null,
            "placement_policy_hosts": null,
            "poll_interval": 3,
            "quota_id": null,
            "reassign_bad_macs": null,
            "rng_device": null,
            "role_mappings": [],
            "serial_console": null,
            "serial_policy": null,
            "serial_policy_value": null,
            "smartcard_enabled": null,
            "snapshot_name": null,
            "snapshot_vm": null,
            "soundcard_enabled": null,
            "sso": null,
            "state": "present",
            "stateless": null,
            "storage_domain": null,
            "sysprep": null,
            "template": null,
            "template_version": null,
            "ticket": null,
            "timeout": 180,
            "timezone": null,
            "type": "server",
            "usb_support": null,
            "use_latest_template_version": null,
            "virtio_scsi_enabled": null,
            "virtio_scsi_multi_queues": null,
            "vmware": null,
            "vnic_profile_mappings": [],
            "volatile": null,
            "wait": true,
            "wait_after_lease": 5,
            "watchdog": null,
            "xen": null
        }
    },
    "msg": "Vm.__init__() got an unexpected keyword argument 'virtio_scsi_multi_queues'"
}

PLAY RECAP *************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Additional notes:
ansible [core 2.14.2]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.11/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /bin/ansible
  python version = 3.11.2 (main, May 24 2023, 00:00:00) [GCC 11.3.1 20221121 (Red Hat 11.3.1-4.3.0.1)] (/usr/bin/python3.11)
  jinja version = 3.1.2
  libyaml = True
oVirt SDK: 4.4.10
oVirt Management Host: 4.4.10
Ansible Control host: OEL 9.2

claremont-awilson • Aug 11 '23 15:08

Most probably caused by an old ovirt-engine-sdk. Check the requirements; 4.5+ is required.
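
If the SDK on the control node is pip-managed, a minimal upgrade play might look like the sketch below (the >=4.5.0 pin simply mirrors the requirement above; localhost is targeted because that is where the modules run in this setup):

---
- hosts: localhost

  tasks:
    # Sketch only: assumes the SDK was installed with pip,
    # not as a distro RPM package.
    - name: Upgrade the oVirt Python SDK on the control node
      ansible.builtin.pip:
        name: ovirt-engine-sdk-python>=4.5.0
        state: present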

jirimacku • Aug 23 '23 11:08

Is the requirement ovirt-engine-sdk 4.5+ even to connect to an oVirt Management Host at 4.4.10?

claremont-awilson • Aug 23 '23 11:08

I'm not sure which exact version of the SDK added the virtio_scsi_multi_queues parameter, but I can see it in 4.4.15. Please try to upgrade the SDK and let me know if you still have a problem. If you are using the latest collection, I recommend using the latest SDK, because the collection relies on features from newer SDK releases.
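
For context on why this fails: as the traceback shows, the module's build_entity() hands each supported option straight to the SDK's ovirtsdk4.types.Vm constructor, so an SDK release that predates an option raises exactly this TypeError. One way to confirm which SDK version the Ansible Python interpreter sees is the sketch below (it assumes a pip-installed ovirt-engine-sdk-python distribution; an RPM-installed SDK may not expose pip metadata):

    # Sketch: queries pip metadata for the SDK from the same
    # interpreter Ansible uses on the control node.
    - name: Show the oVirt SDK version visible to Ansible's interpreter
      ansible.builtin.command:
        argv:
          - python3
          - -c
          - "from importlib.metadata import version; print(version('ovirt-engine-sdk-python'))"
      register: sdk_version
      changed_when: false

    - name: Print the detected SDK version
      ansible.builtin.debug:
        msg: "ovirt-engine-sdk-python: {{ sdk_version.stdout }}"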

mnecas • Aug 24 '23 08:08