community.vmware

vmware_guest error: Unable to access the virtual machine configuration: Unable to access file

71Imapla opened this issue on Jan 25 '22 · 5 comments

SUMMARY

When attempting to provision a new VMware VM to VSAN storage using the vmware_guest module, I get the following error:

msg": "Failed to create a virtual machine : Unable to access the virtual machine configuration: Unable to access file [some_other_non_vsan_storage_cluster_not_connected_to_ESXI_Cluster] Linux_Base_Template/Linux_Base_Template.vmtx".

The same problem is documented in Ansible issue 28649. I have attempted the fix documented there by adding a disk section (see the task below) with the datastore variable (we have many datastores). The templates live on a separate storage system NOT connected to the ESXi clusters that are configured for VSAN.

  • Cloning manually via vCenter to the target ESXi cluster works perfectly.
  • Provisioning to ESXi clusters that have "[some_other_non_vsan_storage_cluster_not_connected_to_ESXI_Cluster]" connected to them works perfectly.
  • Executing the VMware PowerCLI New-VM command works perfectly; the code for that is here:
    https://thesleepyadmins.com/2018/09/08/deploy-multiple-vms-using-powercli-and-vmware-template
ISSUE TYPE
  • Bug Report
COMPONENT NAME

vmware_guest

ANSIBLE VERSION

Ansible Version: 2.11.6
AWX Version: 19.5.0
Python Version: 3.8.6

COLLECTION VERSION

CONFIGURATION

OS / ENVIRONMENT

AWX Operator version: 0.15
K8s OS: Ubuntu 20.04 LTS
OS to be provisioned: Linux or Windows

STEPS TO REPRODUCE

Attempt to provision from a template on storage not directly connected to a VSAN-configured ESXi cluster:


- name: Clone Virtual Machine(s) from Template
  vmware_guest:
    name: "{{ item | upper }}"
    hostname: "{{ provision_vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_user_pswd }}"
    annotation: "{{ provision_notes }}"
    template: "{{ vmtemplate }}"
    datacenter: "{{ provision_vc_datacenter }}"
    folder: "{{ ansible_facts.vmfolder }}"
    cluster: "{{ provision_cluster_target }}"
    disk:
    - size_gb: 50
      type: thin
      datastore:  "{{ provisioning_datastore | string }}"
    hardware:
      num_cpus: "{{ provision_virtual_cpu }}"
      memory_mb:  "{{ vmmemgb }}"
      num_cpu_cores_per_socket: "{{ provision_virtual_cpu }}"
      hotadd_cpu: False
      hotremove_cpu: False
      hotadd_memory: False
      nested_virt: False
      scsi: 'paravirtual'
    networks:
    - name: "{{ network_zone }}"
      type: "static"
      ip: "{{ provision_ip_address }}"
      netmask: "{{ subnet_mask }}"
      gateway: "{{ default_gw }}"
      device_type: 'vmxnet3'
      dvswitch_name: "{{ vcenter_dvswitch }}"
      start_connected: True
    wait_for_ip_address: yes
    wait_for_customization: yes
    customization:
      domain: "{{ provisioning_ad_domain | lower }}"
      dns_servers: 987.65.43.60
      dns_suffix:
        - devnull.blackhole.com
    cdrom:
      type: 'none'
    force: yes
  with_items:
    - "{{ clone }}"
  delegate_to: "localhost"
  register: nice_n_toasty_baked_vm

*** Truncated ***

EXPECTED RESULTS

Provisioning from storage that is NOT attached to a VSAN-configured ESXi cluster succeeds.

ACTUAL RESULTS

msg": "Failed to create a virtual machine : Unable to access the virtual machine configuration: Unable to access file [some_other_non_vsan_storage_cluster_not_connected_to_ESXI_Cluster] Linux_Base_Template/Linux_Base_Template.vmtx".

  "invocation": {
    "module_args": {
      "name": "TMPBLDLNXLL0001",
      "hostname": "vCenterServer",
      "username": "XXXXXXXXXX",
      "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
      "annotation": "Test VSAN provisioning",
      "template": "Linux-8_Base_Template",
      "datacenter": "devnulldc",
      "folder": "devnulldc/vm/Sandbox",
      "cluster": "CL-03",
      "disk": [
        {
          "size_gb": 50,
          "type": "thin",
          "datastore": "['vSAN']",
          "autoselect_datastore": null,
          "controller_number": null,
          "controller_type": null,
          "disk_mode": null,
          "filename": null,
          "size": null,
          "size_kb": null,
          "size_mb": null,
          "size_tb": null,
          "unit_number": null
        }
      ],
      "hardware": {
        "num_cpus": 2,
        "memory_mb": 4096,
        "num_cpu_cores_per_socket": 2,
        "hotadd_cpu": false,
        "hotremove_cpu": false,
        "hotadd_memory": false,
        "nested_virt": false,
        "scsi": "paravirtual",
        "boot_firmware": null,
        "cpu_limit": null,
        "cpu_reservation": null,
        "max_connections": null,
        "mem_limit": null,
        "mem_reservation": null,
        "memory_reservation_lock": null,
        "secure_boot": null,
        "version": null,
        "virt_based_security": null,
        "iommu": null
      },
      "networks": [
        {
          "name": "BlackholeNet",
          "type": "static",
          "ip": "987.65.43.21",
          "netmask": "255.255.255.0",
          "gateway": "987.65.43.1",
          "device_type": "vmxnet3",
          "dvswitch_name": "vDSwitch",
          "start_connected": true
        }

*** Truncated ***

71Imapla (Jan 25 '22)

Same issue here

Zheer09 (Nov 07 '23)

Same issue here. The file exists, the datastore is connected, I can query info about the datastore, I can create files on the datastore, and I can query info about the template itself, all to no avail.

MugBuffalo (May 24 '24)

Ok I solved it by specifying esxi_hostname: 1.2.3.4 in the vmware_guest module call.

Obviously if this isn't used, the first sorted host in the cluster is chosen. And if this host can't access the datastore, then the template can't be cloned.
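
For anyone hitting this, a minimal sketch of that work-around (esxi_host_with_template_datastore is a placeholder variable, not from the original playbook; the module documentation lists esxi_hostname and cluster as mutually exclusive, so the cluster parameter is replaced by the host here):

- name: Clone Virtual Machine(s) from Template, pinned to a host that can see the datastore
  community.vmware.vmware_guest:
    hostname: "{{ provision_vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_user_pswd }}"
    datacenter: "{{ provision_vc_datacenter }}"
    # Work-around: name an ESXi host that has the template's datastore mounted,
    # instead of letting the module pick the first host in the cluster.
    esxi_hostname: "{{ esxi_host_with_template_datastore }}"  # placeholder
    template: "{{ vmtemplate }}"
    name: "{{ item | upper }}"
  loop: "{{ clone }}"
  delegate_to: localhost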

MugBuffalo (May 24 '24)

Obviously if this isn't used, the first sorted host in the cluster is chosen. And if this host can't access the datastore, then the template can't be cloned.

It's best practice that all ESXi hosts in a cluster are similar. This includes hardware, firmware, ESXi version, configuration and maybe some other things that I don't remember at the moment. Of course, this includes datastores.

This is an example of a corner case where I don't know if we should try to implement a work-around or just tell people to follow the best practice. Don't get me wrong, there might be good reasons not to follow the best practice. But implementing corner cases is always error-prone, and I'm not sure if we should do it.

@71Imapla Out of curiosity, did you have a similar configuration? That is, a template on a datastore that wasn't accessible by all ESXi hosts in a cluster?

mariolenz (May 26 '24)

It's best practice that all ESXi hosts in a cluster are similar. This includes hardware, firmware, ESXi version, configuration and maybe some other things that I don't remember at the moment. Of course, this includes datastores.

Certainly. However, I haven't yet found a way to mount a "remote" VMFS datastore on the other hosts in the cluster on ESXi / vCenter Server 8. For NFS shares this works just fine.

This is an example of a corner case where I don't know if we should try to implement a work-around or where we should just tell people to follow follow the best practice. Don't get me wrong, there might be good reasons to not follow the best practice. But implementing corner cases is always error-prone, and I'm not sure if we should do it.

Perhaps suggesting this ("can host X access datastore Y?") in the error message would be enough.
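
Until something like that lands in the error message, a rough pre-flight check is possible on the playbook side. This is only a sketch under assumptions: template_datastore is a hypothetical variable holding the datastore name from the error message, and the check only verifies that the datastore is visible and accessible from the target cluster, not that every host in the cluster mounts it:

- name: Check that the template's datastore is visible from the target cluster
  community.vmware.vmware_datastore_info:
    hostname: "{{ provision_vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_user_pswd }}"
    datacenter: "{{ provision_vc_datacenter }}"
    cluster: "{{ provision_cluster_target }}"
    name: "{{ template_datastore }}"  # hypothetical variable
  delegate_to: localhost
  register: template_ds_info

- name: Fail early with a clearer message than the clone error
  ansible.builtin.assert:
    that:
      - template_ds_info.datastores | length > 0
      - template_ds_info.datastores[0].accessible | bool
    fail_msg: >-
      Datastore {{ template_datastore }} is not accessible from cluster
      {{ provision_cluster_target }}; the clone will fail unless esxi_hostname
      points at a host that can see it.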

MugBuffalo (May 27 '24)