
Cloning template to create VM fails with "msg": "Failed to create a virtual machine : The name 'test0' already exists."

Open srivaa31 opened this issue 5 years ago • 12 comments

SUMMARY

Deploying a VM from a CentOS 7/8 template fails with a "VM already exists" error message. The datastore where the VM template resides is a shared datastore, created through one of our corporate internal deployment and orchestration tools. I noticed a similar issue in the past (reference: #28250) and tried the workarounds mentioned in the comments there by different folks; none worked.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
CONFIGURATION

OS / ENVIRONMENT

CentOS 8, vSphere Client version 6.7.0.42000

STEPS TO REPRODUCE
---

- name: Clone the template
  vmware_guest:
    hostname: "{{ vmware_server }}"
    username: "{{ vmware_user }}"
    password: "{{ vmware_pass }}"
    validate_certs: False
    name: "{{ item.name }}"
    template: "{{ item.template }}"
    datacenter: "{{ source_datacenter_name }}"
    datastore: "{{ source_datastore }}"
    folder: "{{ source_datacenter_name }}/vm"
    #folder: /
    state: present
    #state: poweredon
    esxi_hostname: "{{ item.esxi_host }}"
    wait_for_ip_address: yes
  with_items: "{{ vms }}"
  register: result

EXPECTED RESULTS

vCenter should show in Recent Tasks that the target VM was created successfully, and the VM should be powered on and assigned an IP through DHCP (in my case) for further action.

ACTUAL RESULTS

I get the error message (red bang) in vCenter that a VM with that name already exists, even though the VM is actually created successfully and is functional. This impacts subsequent test cases in my playbook. (That can be worked around, but it doesn't give a clean impression.) When I manually create a VM with a new name, the error message is not observed, which in my opinion rules out an underlying issue on the VMware side.


srivaa31 avatar Apr 27 '20 14:04 srivaa31

**One of my delete-VM Ansible tasks fails with the stack trace below in Jenkins. It actually deletes the VM in vCenter but reports this error on the console:**

TASK [delete_vms : Delete VMs] *************************************************
failed: [192.168.105.238 -> localhost] (item={'test_vm0': None, 'name': 'test_vm0', 'template': 'CentOS-fio-template', 'esxi_host': 'vxflex-node-11.rack.lab'}) => {"ansible_loop_var": "item", "changed": false, "item": {"esxi_host": "vxflex-node-11.rack.lab", "name": "test_vm0", "template": "CentOS-fio-template", "test_vm0": null}, "module_stderr": "pyVmomi.VmomiSupport.InvalidPowerState: (vim.fault.InvalidPowerState) {\n   dynamicType = <unset>,\n   dynamicProperty = (vmodl.DynamicProperty) [],\n   msg = 'The attempted operation cannot be performed in the current state (Powered off).',\n   faultCause = <unset>,\n   faultMessage = (vmodl.LocalizableMessage) [],\n   requestedState = 'poweredOn',\n   existingState = 'poweredOff'\n}\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1587996233.7801487-78537100729638/AnsiballZ_vmware_guest.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1587996233.7801487-78537100729638/AnsiballZ_vmware_guest.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1587996233.7801487-78537100729638/AnsiballZ_vmware_guest.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_guest', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_vmware_guest_payload_x3coaqre/ansible_vmware_guest_payload.zip/ansible/modules/cloud/vmware/vmware_guest.py\", line 2834, in <module>\n  File \"/tmp/ansible_vmware_guest_payload_x3coaqre/ansible_vmware_guest_payload.zip/ansible/modules/cloud/vmware/vmware_guest.py\", line 2776, in main\n  File \"/tmp/ansible_vmware_guest_payload_x3coaqre/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 797, in set_vm_power_state\n  File \"/tmp/ansible_vmware_guest_payload_x3coaqre/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 82, in wait_for_task\n  File \"<string>\", line 3, in raise_from\nansible.module_utils.vmware.TaskError: ('The attempted operation cannot be performed in the current state (Powered off).', None)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
changed: [127.0.0.1 -> localhost] => (item={'test_vm0': None, 'name': 'test_vm0', 'template': 'CentOS-fio-template', 'esxi_host': 'vxflex-node-11.rack.lab'})
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: }
failed: [192.168.105.238 -> localhost] (item={'test_vm1': None, 'name': 'test_vm1', 'template': 'CentOS-fio-template', 'esxi_host': 'vxflex-node-11.rack.lab'}) => {"ansible_loop_var": "item", "changed": false, "item": {"esxi_host": "vxflex-node-11.rack.lab", "name": "test_vm1", "template": "CentOS-fio-template", "test_vm1": null}, "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1587996236.9711642-129284799168276/AnsiballZ_vmware_guest.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1587996236.9711642-129284799168276/AnsiballZ_vmware_guest.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1587996236.9711642-129284799168276/AnsiballZ_vmware_guest.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_guest', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_vmware_guest_payload_qvayh3jd/ansible_vmware_guest_payload.zip/ansible/modules/cloud/vmware/vmware_guest.py\", line 2834, in <module>\n  File \"/tmp/ansible_vmware_guest_payload_qvayh3jd/ansible_vmware_guest_payload.zip/ansible/modules/cloud/vmware/vmware_guest.py\", line 2776, in main\n  File \"/tmp/ansible_vmware_guest_payload_qvayh3jd/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 805, in set_vm_power_state\n  File \"/tmp/ansible_vmware_guest_payload_qvayh3jd/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 365, in gather_vm_facts\n  File \"/tmp/ansible_vmware_guest_payload_qvayh3jd/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 269, in _get_vm_prop\n  File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 700, in __call__\n    return self.f(*args, **kwargs)\n  File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 520, in _InvokeAccessor\n    return self._stub.InvokeAccessor(self, info)\n  File \"/usr/local/lib/python3.6/site-packages/pyVmomi/StubAdapterAccessorImpl.py\", line 42, in InvokeAccessor\n    options=self._pcType.RetrieveOptions(maxObjects=1))\n  File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 706, in <lambda>\n    self.f(*(self.args + (obj,) + args), **kwargs)\n  File \"/usr/local/lib/python3.6/site-packages/pyVmomi/VmomiSupport.py\", line 512, in _InvokeMethod\n    return self._stub.InvokeMethod(self, info, args)\n  File \"/usr/local/lib/python3.6/site-packages/pyVmomi/SoapAdapter.py\", line 1397, in InvokeMethod\n    raise obj # pylint: disable-msg=E0702\npyVmomi.VmomiSupport.ManagedObjectNotFound: (vmodl.fault.ManagedObjectNotFound) {\n   dynamicType = <unset>,\n   dynamicProperty = (vmodl.DynamicProperty) [],\n   msg = \"The object 'vim.VirtualMachine:vm-212' has already been deleted or has not been completely created\",\n   faultCause = <unset>,\n   faultMessage = (vmodl.LocalizableMessage) [],\n   obj = 'vim.VirtualMachine:vm-212'\n}\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
changed: [127.0.0.1 -> localhost] => (item={'test_vm1': None, 'name': 'test_vm1', 'template': 'CentOS-fio-template', 'esxi_host': 'vxflex-node-11.rack.lab'})
failed: [127.0.0.1 -> localhost] (item={'test_vm2': None, 'name': 'test_vm2', 'template': 'CentOS-fio-template', 'esxi_host': 'vxflex-node-12.rack.lab'}) => {"ansible_loop_var": "item", "changed": false, "item": {"esxi_host": "vxflex-node-12.rack.lab", "name": "test_vm2", "template": "CentOS-fio-template", "test_vm2": null}, "module_stderr": "pyVmomi.VmomiSupport.InvalidPowerState: (vim.fault.InvalidPowerState) {\n   dynamicType = <unset>,\n   dynamicProperty = (vmodl.DynamicProperty) [],\n   msg = 'The attempted operation cannot be performed in the current state (Powered off).',\n   faultCause = <unset>,\n   faultMessage = (vmodl.LocalizableMessage) [],\n   requestedState = 'poweredOn',\n   existingState = 'poweredOff'\n}\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1587996241.2424653-102156280041174/AnsiballZ_vmware_guest.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1587996241.2424653-102156280041174/AnsiballZ_vmware_guest.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1587996241.2424653-102156280041174/AnsiballZ_vmware_guest.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.vmware.vmware_guest', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_vmware_guest_payload_x2mb4im0/ansible_vmware_guest_payload.zip/ansible/modules/cloud/vmware/vmware_guest.py\", line 2834, in <module>\n  File \"/tmp/ansible_vmware_guest_payload_x2mb4im0/ansible_vmware_guest_payload.zip/ansible/modules/cloud/vmware/vmware_guest.py\", line 2776, in main\n  File \"/tmp/ansible_vmware_guest_payload_x2mb4im0/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 797, in set_vm_power_state\n  File \"/tmp/ansible_vmware_guest_payload_x2mb4im0/ansible_vmware_guest_payload.zip/ansible/module_utils/vmware.py\", line 82, in wait_for_task\n  File \"<string>\", line 3, in raise_from\nansible.module_utils.vmware.TaskError: ('The attempted operation cannot be performed in the current state (Powered off).', None)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
changed: [192.168.105.238 -> localhost] => (item={'test_vm2': None, 'name': 'test_vm2', 'template': 'CentOS-fio-template', 'esxi_host': 'vxflex-node-12.rack.lab'})

srivaa31 avatar Apr 27 '20 14:04 srivaa31

cc @Akasurde @Tomorrow9 @goneri @lparkes @nerzhul @pdellaert @pgbidkar @warthog9 click here for bot help

ansibullbot avatar Aug 19 '20 23:08 ansibullbot

I have the same issue. Can you recommend a workaround? It works if I run a single playbook, but I get the error message when my task is inside a role.

"msg": "Failed to create a virtual machine : The name '********' already exists.",

gyzszabo avatar Dec 04 '20 18:12 gyzszabo

Same issue: it works correctly with the vmware module in Ansible 2.9.14, but fails with the community module on Ansible 2.9.14 and Ansible 3.0.0.

yabb85 avatar Feb 19 '21 10:02 yabb85

I just hit this issue as well, using Ansible 2.10.6, Python 3.9.1, community.vmware 1.8.0, and vSphere 6.7. In my case, the error happens even when I manually "delete from disk" the machine in question before running my Ansible playbook. I can also confirm, using the vSphere search, that no resource with that name exists before running the playbook.

Each time the error message presents itself, which is now every time I run this playbook on this machine, the vmware_guest task appears to actually succeed, recreating the machine remotely. Here's the relevant vmware_guest portion of the Jenkins console output:

16:19:34  TASK [Create Engineering Services Template in vSphere from Debian 10 Template, Power On for further Provisioning] ***
16:19:36  fatal: [localhost]: FAILED! => changed=false 
16:19:36    msg: 'Failed to create a virtual machine : The name ''engineering-services-template'' already exists.'
16:20:22  changed: [engineering-services-template]

You can see that this task took 50 seconds or so, which is about how long it normally takes to clone the template.

timblaktu avatar Mar 29 '21 23:03 timblaktu

This continues to happen in the environment I described above. Below is the more verbose (-vvv) console output from the vmware_guest task that is inexplicably "failing" to create a new VM from a template, after I explicitly and manually chose "delete from disk" for the machine in vSphere. As I mentioned, the task actually succeeds, but it reports an error and moves on.

@Akasurde @goneri The most interesting thing I see in this output is that the variable I register in the vmware_guest task, new_vm, shows a different value in the verbose output of the task itself (where it indicates changed: false, failed: true) than in the subsequent debug task that prints the same variable (where it indicates changed: true, failed: false).

To work around this issue, I have to add a subsequent fail task that checks the registered variable new_vm.failed. I tried to redefine failure for the task using failed_when, but when this error happens the registered variable doesn't seem to contain anything I can check, e.g. the standard msg and stdout members don't exist. :-(
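
Something like this, as a minimal sketch (it assumes the clone task sets ignore_errors: true and registers new_vm, as mine does):

- name: Surface a real failure from the clone task
  ansible.builtin.fail:
    msg: "vmware_guest reported: {{ new_vm }}"
  # fires only if the registered result actually says failed
  when: new_vm.failed | default(false)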

09:25:09  TASK [Create Engineering Services Template in vSphere from Debian 10 Template, Power On for further Provisioning] ***
09:25:09  task path: /home/jenkins/.jenkins/workspace/ring-services-vsphere-template_4/ansible/projects/vsphere-engineering-services-template/engineering-services-template.yml:19
09:25:09  Wednesday 07 April 2021  09:25:24 -0700 (0:00:00.021)       0:00:00.021 ******* 
09:25:09  Wednesday 07 April 2021  09:25:24 -0700 (0:00:00.020)       0:00:00.020 ******* 
09:25:09  <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
09:25:09  <127.0.0.1> EXEC /bin/sh -c 'echo ~jenkins && sleep 0'
09:25:09  <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/jenkins/.ansible/tmp `"&& mkdir "` echo /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3378916-21091-205596001496630 `" && echo ansible-tmp-1617812724.3378916-21091-205596001496630="` echo /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3378916-21091-205596001496630 `" ) && sleep 0'
09:25:09  <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: jenkins
09:25:09  <127.0.0.1> EXEC /bin/sh -c 'echo ~jenkins && sleep 0'
09:25:09  <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/jenkins/.ansible/tmp `"&& mkdir "` echo /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3469033-21092-246885996644537 `" && echo ansible-tmp-1617812724.3469033-21092-246885996644537="` echo /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3469033-21092-246885996644537 `" ) && sleep 0'
09:25:10  Using module file /home/jenkins/.ansible/collections/Ansible-vsphere-e-feature-SWOPS-832/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
09:25:10  <127.0.0.1> PUT /home/jenkins/.ansible/tmp/ansible-local-21079i__lbfay/tmp8o2_26it TO /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3378916-21091-205596001496630/AnsiballZ_vmware_guest.py
09:25:10  Using module file /home/jenkins/.ansible/collections/Ansible-vsphere-e-feature-SWOPS-832/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
09:25:10  <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3378916-21091-205596001496630/ /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3378916-21091-205596001496630/AnsiballZ_vmware_guest.py && sleep 0'
09:25:10  <127.0.0.1> PUT /home/jenkins/.ansible/tmp/ansible-local-21079i__lbfay/tmpp6ii4f9r TO /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3469033-21092-246885996644537/AnsiballZ_vmware_guest.py
09:25:10  <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3469033-21092-246885996644537/ /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3469033-21092-246885996644537/AnsiballZ_vmware_guest.py && sleep 0'
09:25:10  <127.0.0.1> EXEC /bin/sh -c 'VMWARE_PASSWORD=xxx VMWARE_USER=xxx VMWARE_HOST=xxx VMWARE_VALIDATE_CERTS=False /home/jenkins/.pyenv/versions/Ansible-vsphere-e-feature-SWOPS-832/bin/python /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3378916-21091-205596001496630/AnsiballZ_vmware_guest.py && sleep 0'
09:25:10  <127.0.0.1> EXEC /bin/sh -c 'VMWARE_PASSWORD=xxx VMWARE_USER=xxx VMWARE_HOST=xxx VMWARE_VALIDATE_CERTS=False /home/jenkins/.pyenv/versions/Ansible-vsphere-e-feature-SWOPS-832/bin/python /home/jenkins/.ansible/tmp/ansible-tmp-1617812724.3469033-21092-246885996644537/AnsiballZ_vmware_guest.py && sleep 0'
09:25:11  fatal: [localhost]: FAILED! => changed=false 
09:25:11    invocation:
09:25:11      module_args:
09:25:11        advanced_settings: []
09:25:11        annotation: This is the Template that all Engineering Services are based on.
09:25:11        cdrom: []
09:25:11        cluster: R740
09:25:11        convert: null
09:25:11        customization:
09:25:11          autologon: null
09:25:11          autologoncount: null
09:25:11          dns_servers: null
09:25:11          dns_suffix: null
09:25:11          domain: null
09:25:11          domainadmin: null
09:25:11          domainadminpassword: null
09:25:11          existing_vm: null
09:25:11          fullname: null
09:25:11          hostname: null
09:25:11          hwclockUTC: null
09:25:11          joindomain: null
09:25:11          joinworkgroup: null
09:25:11          orgname: null
09:25:11          password: null
09:25:11          productid: null
09:25:11          runonce: null
09:25:11          timezone: null
09:25:11        customization_spec: null
09:25:11        customvalues: []
09:25:11        datacenter: Engineering
09:25:11        datastore: null
09:25:11        delete_from_inventory: false
09:25:11        disk: []
09:25:11        esxi_hostname: null
09:25:11        folder: JenkinsCICD
09:25:11        force: false
09:25:11        guest_id: null
09:25:11        hardware:
09:25:11          boot_firmware: null
09:25:11          cpu_limit: null
09:25:11          cpu_reservation: null
09:25:11          hotadd_cpu: null
09:25:11          hotadd_memory: null
09:25:11          hotremove_cpu: null
09:25:11          max_connections: null
09:25:11          mem_limit: null
09:25:11          mem_reservation: null
09:25:11          memory_mb: null
09:25:11          memory_reservation_lock: null
09:25:11          nested_virt: null
09:25:11          num_cpu_cores_per_socket: null
09:25:11          num_cpus: null
09:25:11          scsi: null
09:25:11          version: null
09:25:11          virt_based_security: null
09:25:11        hostname: xxx
09:25:11        is_template: false
09:25:11        linked_clone: false
09:25:11        name: engineering-services-template
09:25:11        name_match: first
09:25:11        networks: []
09:25:11        password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
09:25:11        port: 443
09:25:11        proxy_host: null
09:25:11        proxy_port: null
09:25:11        resource_pool: null
09:25:11        snapshot_src: null
09:25:11        state: powered-on
09:25:11        state_change_timeout: 0
09:25:11        template: debian10
09:25:11        use_instance_uuid: false
09:25:11        username: xxx
09:25:11        uuid: null
09:25:11        validate_certs: false
09:25:11        vapp_properties: []
09:25:11        wait_for_customization: false
09:25:11        wait_for_customization_timeout: 3600
09:25:11        wait_for_ip_address: true
09:25:11        wait_for_ip_address_timeout: 300
09:25:11    msg: 'Failed to create a virtual machine : The name ''engineering-services-template'' already exists.'
09:25:43  changed: [engineering-services-template] => changed=true 
09:25:43    instance:
09:25:43      advanced_settings:
09:25:43        ethernet0.pciSlotNumber: '192'
09:25:43        guestinfo.vmtools.buildNumber: '17337674'
09:25:43        guestinfo.vmtools.description: open-vm-tools 11.2.5 build 17337674
09:25:43        guestinfo.vmtools.versionNumber: '11333'
09:25:43        guestinfo.vmtools.versionString: 11.2.5
09:25:43        hpet0.present: 'TRUE'
09:25:43        migrate.hostLog: engineering-services-template-0f5c40d8.hlog
09:25:43        migrate.hostLogState: none
09:25:43        migrate.migrationId: '0'
09:25:43        monitor.phys_bits_used: '43'
09:25:43        numa.autosize.cookie: '80001'
09:25:43        numa.autosize.vcpu.maxPerVirtualNode: '8'
09:25:43        nvram: engineering-services-template.nvram
09:25:43        pciBridge0.pciSlotNumber: '17'
09:25:43        pciBridge0.present: 'TRUE'
09:25:43        pciBridge4.functions: '8'
09:25:43        pciBridge4.pciSlotNumber: '21'
09:25:43        pciBridge4.present: 'TRUE'
09:25:43        pciBridge4.virtualDev: pcieRootPort
09:25:43        pciBridge5.functions: '8'
09:25:43        pciBridge5.pciSlotNumber: '22'
09:25:43        pciBridge5.present: 'TRUE'
09:25:43        pciBridge5.virtualDev: pcieRootPort
09:25:43        pciBridge6.functions: '8'
09:25:43        pciBridge6.pciSlotNumber: '23'
09:25:43        pciBridge6.present: 'TRUE'
09:25:43        pciBridge6.virtualDev: pcieRootPort
09:25:43        pciBridge7.functions: '8'
09:25:43        pciBridge7.pciSlotNumber: '24'
09:25:43        pciBridge7.present: 'TRUE'
09:25:43        pciBridge7.virtualDev: pcieRootPort
09:25:43        sched.cpu.latencySensitivity: normal
09:25:43        sched.mem.pin: 'TRUE'
09:25:43        sched.swap.derivedName: /vmfs/volumes/5a672d18-86093110-eef4-000af7be1760/engineering-services-template_2/engineering-services-template-1d6181ea.vswp
09:25:43        scsi0.pciSlotNumber: '160'
09:25:43        scsi0.sasWWID: 50 05 05 67 6f 78 ee 10
09:25:43        scsi0:0.redo: ''
09:25:43        softPowerOff: 'FALSE'
09:25:43        svga.guestBackedPrimaryAware: 'TRUE'
09:25:43        svga.present: 'TRUE'
09:25:43        tools.guest.desktop.autolock: 'FALSE'
09:25:43        toolsInstallManager.updateCounter: '1'
09:25:43        vmci0.pciSlotNumber: '32'
09:25:43        vmotion.checkpointFBSize: '4194304'
09:25:43        vmotion.checkpointSVGAPrimarySize: '4194304'
09:25:43        vmware.tools.internalversion: '11333'
09:25:43        vmware.tools.requiredversion: '11265'
09:25:43      annotation: This is the Template that all Engineering Services are based on.
09:25:43      current_snapshot: null
09:25:43      customvalues: {}
09:25:43      guest_consolidation_needed: false
09:25:43      guest_question: null
09:25:43      guest_tools_status: guestToolsRunning
09:25:43      guest_tools_version: '11333'
09:25:43      hw_cluster: R740
09:25:43      hw_cores_per_socket: 1
09:25:43      hw_datastores:
09:25:43      - bsienghost4:Local
09:25:43      hw_esxi_host: xxx
09:25:43      hw_eth0:
09:25:43        addresstype: assigned
09:25:43        ipaddresses: null
09:25:43        label: Network adapter 1
09:25:43        macaddress: 00:50:56:b7:69:5d
09:25:43        macaddress_dash: 00-50-56-b7-69-5d
09:25:43        portgroup_key: null
09:25:43        portgroup_portkey: null
09:25:43        summary: EngServer
09:25:43      hw_files:
09:25:43      - '[bsienghost4:Local] engineering-services-template_2/engineering-services-template.vmx'
09:25:43      - '[bsienghost4:Local] engineering-services-template_2/engineering-services-template.nvram'
09:25:43      - '[bsienghost4:Local] engineering-services-template_2/engineering-services-template.vmsd'
09:25:43      - '[bsienghost4:Local] engineering-services-template_2/engineering-services-template.vmdk'
09:25:43      hw_folder: /Engineering/vm/JenkinsCICD
09:25:43      hw_guest_full_name: ''
09:25:43      hw_guest_ha_state: true
09:25:43      hw_guest_id: null
09:25:43      hw_interfaces:
09:25:43      - eth0
09:25:43      hw_is_template: false
09:25:43      hw_memtotal_mb: 1024
09:25:43      hw_name: engineering-services-template
09:25:43      hw_power_status: poweredOn
09:25:43      hw_processor_count: 8
09:25:43      hw_product_uuid: 4237d717-6f78-ee1f-ea0c-5bd2c6014d3c
09:25:43      hw_version: vmx-14
09:25:43      instance_uuid: 5037effd-2eaa-34bd-0c3a-9e9fc273d189
09:25:43      ipv4: 172.16.22.23
09:25:43      ipv6: null
09:25:43      module_hw: true
09:25:43      moid: vm-608
09:25:43      snapshots: []
09:25:43      vimref: vim.VirtualMachine:vm-608
09:25:43      vnc: {}
09:25:43    invocation:
09:25:43      module_args:
09:25:43        advanced_settings: []
09:25:43        annotation: This is the Template that all Engineering Services are based on.
09:25:43        cdrom: []
09:25:43        cluster: R740
09:25:43        convert: null
09:25:43        customization:
09:25:43          autologon: null
09:25:43          autologoncount: null
09:25:43          dns_servers: null
09:25:43          dns_suffix: null
09:25:43          domain: null
09:25:43          domainadmin: null
09:25:43          domainadminpassword: null
09:25:43          existing_vm: null
09:25:43          fullname: null
09:25:43          hostname: null
09:25:43          hwclockUTC: null
09:25:43          joindomain: null
09:25:43          joinworkgroup: null
09:25:43          orgname: null
09:25:43          password: null
09:25:43          productid: null
09:25:43          runonce: null
09:25:43          timezone: null
09:25:43        customization_spec: null
09:25:43        customvalues: []
09:25:43        datacenter: Engineering
09:25:43        datastore: null
09:25:43        delete_from_inventory: false
09:25:43        disk: []
09:25:43        esxi_hostname: null
09:25:43        folder: JenkinsCICD
09:25:43        force: false
09:25:43        guest_id: null
09:25:43        hardware:
09:25:43          boot_firmware: null
09:25:43          cpu_limit: null
09:25:43          cpu_reservation: null
09:25:43          hotadd_cpu: null
09:25:43          hotadd_memory: null
09:25:43          hotremove_cpu: null
09:25:43          max_connections: null
09:25:43          mem_limit: null
09:25:43          mem_reservation: null
09:25:43          memory_mb: null
09:25:43          memory_reservation_lock: null
09:25:43          nested_virt: null
09:25:43          num_cpu_cores_per_socket: null
09:25:43          num_cpus: null
09:25:43          scsi: null
09:25:43          version: null
09:25:43          virt_based_security: null
09:25:43        hostname: xxx
09:25:43        is_template: false
09:25:43        linked_clone: false
09:25:43        name: engineering-services-template
09:25:43        name_match: first
09:25:43        networks: []
09:25:43        password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
09:25:43        port: 443
09:25:43        proxy_host: null
09:25:43        proxy_port: null
09:25:43        resource_pool: null
09:25:43        snapshot_src: null
09:25:43        state: powered-on
09:25:43        state_change_timeout: 0
09:25:43        template: debian10
09:25:43        use_instance_uuid: false
09:25:43        username: xxx
09:25:43        uuid: null
09:25:43        validate_certs: false
09:25:43        vapp_properties: []
09:25:43        wait_for_customization: false
09:25:43        wait_for_customization_timeout: 3600
09:25:43        wait_for_ip_address: true
09:25:43        wait_for_ip_address_timeout: 300

timblaktu avatar Apr 07 '21 16:04 timblaktu

I have prepended a task that force-deletes the machine I'm cloning before trying to create it from the template, and I am getting the same misbehavior described by @srivaa31 in this comment. Essentially, it deletes the machine, but the task returns an error, and when run verbosely I see the following stack trace. I'm not sure why it thinks I've requested the poweredOn state, because my task looks like this:

    - name: Force Delete Engineering Services Template in vSphere. We re-create this from scratch every time.
      tags: ["delete"]
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_host }}"
        datacenter: "{{ datacenter }}"
        cluster: "{{ cluster }}"
        folder: "{{ folder }}"
        name: "{{ new_machine_name }}"
        state: absent
        force: true
      register: absent_vm
      delegate_to: localhost
12:28:10  fatal: [engineering-services-template]: FAILED! => changed=false 
12:28:10    module_stderr: |-
12:28:10      pyVmomi.VmomiSupport.InvalidPowerState: (vim.fault.InvalidPowerState) {
12:28:10         dynamicType = <unset>,
12:28:10         dynamicProperty = (vmodl.DynamicProperty) [],
12:28:10         msg = 'The attempted operation cannot be performed in the current state (Powered off).',
12:28:10         faultCause = <unset>,
12:28:10         faultMessage = (vmodl.LocalizableMessage) [],
12:28:10         requestedState = 'poweredOn',
12:28:10         existingState = 'poweredOff'
12:28:10      }
12:28:10    
12:28:10      The above exception was the direct cause of the following exception:
12:28:10    
12:28:10      Traceback (most recent call last):
12:28:10        File "/home/jenkins/.ansible/tmp/ansible-tmp-1617823633.3834925-24521-17778655727483/AnsiballZ_vmware_guest.py", line 249, in <module>
12:28:10          _ansiballz_main()
12:28:10        File "/home/jenkins/.ansible/tmp/ansible-tmp-1617823633.3834925-24521-17778655727483/AnsiballZ_vmware_guest.py", line 239, in _ansiballz_main
12:28:10          invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
12:28:10        File "/home/jenkins/.ansible/tmp/ansible-tmp-1617823633.3834925-24521-17778655727483/AnsiballZ_vmware_guest.py", line 110, in invoke_module
12:28:10          runpy.run_module(mod_name='ansible_collections.community.vmware.plugins.modules.vmware_guest', init_globals=None, run_name='__main__', alter_sys=True)
12:28:10        File "/home/jenkins/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 210, in run_module
12:28:10          return _run_module_code(code, init_globals, run_name, mod_spec)
12:28:10        File "/home/jenkins/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 97, in _run_module_code
12:28:10          _run_code(code, mod_globals, init_globals,
12:28:10        File "/home/jenkins/.pyenv/versions/3.9.0/lib/python3.9/runpy.py", line 87, in _run_code
12:28:10          exec(code, run_globals)
12:28:10        File "/tmp/ansible_community.vmware.vmware_guest_payload_ajy_ual9/ansible_community.vmware.vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3549, in <module>
12:28:10        File "/tmp/ansible_community.vmware.vmware_guest_payload_ajy_ual9/ansible_community.vmware.vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/modules/vmware_guest.py", line 3487, in main
12:28:10        File "/tmp/ansible_community.vmware.vmware_guest_payload_ajy_ual9/ansible_community.vmware.vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py", line 848, in set_vm_power_state
12:28:10        File "/tmp/ansible_community.vmware.vmware_guest_payload_ajy_ual9/ansible_community.vmware.vmware_guest_payload.zip/ansible_collections/community/vmware/plugins/module_utils/vmware.py", line 83, in wait_for_task
12:28:10        File "<string>", line 3, in raise_from
12:28:10      ansible_collections.community.vmware.plugins.module_utils.vmware.TaskError: ('The attempted operation cannot be performed in the current state (Powered off).', None)
12:28:10    module_stdout: ''
12:28:10    msg: |-
12:28:10      MODULE FAILURE
12:28:10      See stdout/stderr for the exact error
12:28:10    rc: 1

timblaktu avatar Apr 07 '21 19:04 timblaktu

My ansible-vsphere project continues to run in the hacky error-handler state described above. I suspect this issue is another case of the vmware_guest module having undocumented requirements that limit the scope of changes one can make in a single task invocation (like this).

timblaktu avatar Apr 14 '21 13:04 timblaktu

I encountered this today with 3.2.0 and, in my case, traced it down to a trailing / in the folder setting. The playbook worked with 2.3.

In ansible_collections/community/vmware/plugins/module_utils/vmware.py, the test on line 1112 in get_vm(), elif self.params['folder'] in actual_vm_folder_path, failed because actual_vm_folder_path was /some/absolute/path/directory while self.params['folder'] was /path/directory/. The trailing / makes the substring check fail: '/path/directory/' is not contained in '/some/absolute/path/directory'.

This caused get_vm() to return None instead of the VM object, which in turn caused vmware_guest to try to recreate the VM.

Removing the trailing / from the config solved the issue for me.
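
For anyone who can't easily edit the folder value at its source, a playbook-side guard can strip the trailing slash before it reaches the module. A minimal sketch (the variable names vcenter_datacenter, vm_folder, new_machine_name, and template_name are just examples):

- name: Clone the template with a normalized folder path
  community.vmware.vmware_guest:
    datacenter: "{{ vcenter_datacenter }}"
    # '/Engineering/vm/dir/' -> '/Engineering/vm/dir', so the substring
    # check in get_vm() can match the actual VM folder path
    folder: "{{ vm_folder | regex_replace('/+$', '') }}"
    name: "{{ new_machine_name }}"
    template: "{{ template_name }}"
    state: present
  delegate_to: localhost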

nicko2n avatar Jun 09 '21 00:06 nicko2n

I appear to be having this issue as well. I did some poking, and what I am seeing is that Ansible is sending the VM creation command twice in the task. I even tried to put in a "pause", hoping that would be a good workaround, but that didn't work either. I used

export ANSIBLE_KEEP_REMOTE_FILES=1

to keep the temp files, and in the output I see that two files are created and run at the same time:


<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/lpereira/.ansible/tmp/ansible-tmp-1626976231.4207983-1031-237599583159971/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/lpereira/.ansible/tmp/ansible-tmp-1626976231.4015949-1030-220164265489253/AnsiballZ_vmware_guest.py && sleep 0'

Both of these files have the exact same params. Now, I don't know how to debug this to find out why it's generating two, but I know the module itself is OK, because if I build an args.json with the parameters and run the module that way, it works fine.

lpereira1 avatar Jul 22 '21 18:07 lpereira1

I have found a duct-tape workaround. What I have been able to do successfully is make the first VM in the loop a dummy (configured, but a throwaway). If I do this, all the other VMs work fine. It looks like the first run of the module pushes twice, but all subsequent items in the loop run with a single invocation.
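
A minimal sketch of that workaround, reusing the item shape from the loop earlier in this thread (the dummy entry name is made up; delete that VM again after the play):

vms:
  - name: dummy-throwaway   # absorbs the doubled first module run
    template: CentOS-fio-template
    esxi_host: vxflex-node-11.rack.lab
  - name: test_vm0
    template: CentOS-fio-template
    esxi_host: vxflex-node-11.rack.lab
  - name: test_vm1
    template: CentOS-fio-template
    esxi_host: vxflex-node-11.rack.lab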

First item in loop:

Using module file /home/lpereira/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
Using module file /home/lpereira/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
<localhost> PUT /home/lpereira/.ansible/tmp/ansible-local-3474aszy3v_u/tmpb1sdwmiu TO /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/AnsiballZ_vmware_guest.py
<localhost> PUT /home/lpereira/.ansible/tmp/ansible-local-3474aszy3v_u/tmp83doteiw TO /home/lpereira/.ansible/tmp/ansible-tmp-1626986721.0273826-3482-141520763408454/AnsiballZ_vmware_guest.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/ /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c 'chmod u+x /home/lpereira/.ansible/tmp/ansible-tmp-1626986721.0273826-3482-141520763408454/ /home/lpereira/.ansible/tmp/ansible-tmp-1626986721.0273826-3482-141520763408454/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/lpereira/.ansible/tmp/ansible-tmp-1626986721.0273826-3482-141520763408454/AnsiballZ_vmware_guest.py && sleep 0'

second and third item loop I tested:

redirecting (type: modules) ansible.builtin.vmware_guest to community.vmware.vmware_guest
Using module file /home/lpereira/.ansible/collections/ansible_collections/community/vmware/plugins/modules/vmware_guest.py
<localhost> PUT /home/lpereira/.ansible/tmp/ansible-local-3474aszy3v_u/tmp1n8but6e TO /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/AnsiballZ_vmware_guest.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/ /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /home/lpereira/.ansible/tmp/ansible-tmp-1626986720.9811466-3481-173092559806086/AnsiballZ_vmware_guest.py && sleep 0'

lpereira1 avatar Jul 22 '21 20:07 lpereira1

I found this issue as well when I set the folder: /{{ vcenter_datacenter }}/vm/, but it returns ok: [localhost] when I set folder: / 🤔

EDIT: Ah, I just noticed https://github.com/ansible-collections/community.vmware/issues/156#issuecomment-857283787, and after changing my folder variable to /{{ vcenter_datacenter }}/vm everything works fine. Maybe the code should check for a trailing / and trim it? Or just update the documentation to advise on this?

bhundven avatar Jun 02 '22 21:06 bhundven