community.vmware
vmware_vmotion specify Destination Network
SUMMARY
The vmware_vmotion module needs an option to select the target network, which is required in some corner cases:
"msg": "(\"Network interface 'Network adapter 1' uses network 'DVSwitch: xx xx xx xx xx xx xx', which is not accessible.\", None)"
ISSUE TYPE
- Feature Idea
COMPONENT NAME
vmware_vmotion
ADDITIONAL INFORMATION
New option: destination_network: "{{ network_name }}"
- name: Perform storage vMotion and host vMotion of virtual machine
  community.vmware.vmware_vmotion:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: yes
    destination_datacenter: "{{ datacenter_name_destination }}"
    vm_uuid: "{{ vm_facts.instance.hw_product_uuid }}"
    destination_host: "{{ host_destination.name }}"
    destination_datastore: "{{ recommended_datastore_destination.recommended_datastore }}"
    destination_network: "{{ network_name }}"
  delegate_to: localhost
I also get this error:
"msg": "(\"Currently connected network interface 'Network adapter 1' uses network 'DVSwitch: xx xx xx xx xx xx xx xx-xx xx xx xx xx xx xx xx', which is not accessible.\", None)"
My portgroup naming convention has the cluster name appended to the end of our portgroup names. Sadly we can't use this module at the moment if we need to vmotion a VM to another cluster.
I can do the migration after removing the network. Has anyone found another solution without removing the network?
I don't think that a simple destination_network: "{{ network_name }}" is enough. After all, a VM might have more than one vNIC and therefore possibly require more than one destination network. That makes a solution more complex.
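As an illustration of the multi-vNIC concern, an option covering this case would probably have to map each adapter to its own target network. The following syntax is purely hypothetical and does not exist in the module; the adapter labels and variable names are assumptions:

```yaml
# Hypothetical, NOT implemented: per-adapter destination networks,
# keyed by the device label shown in vSphere.
destination_networks:
  "Network adapter 1": "{{ frontend_network_name }}"
  "Network adapter 2": "{{ backend_network_name }}"
```

A flat destination_network string would only cover the single-vNIC case, which is why a mapping like this (or a list of per-adapter entries) would likely be needed for a complete solution.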
@mariolenz Could we add this option to the vmotion module, in the same way that the network option was added for cloning a virtual machine?
Hi everyone,
Is there any other option? I want to migrate VMs between datacenters. I have the same issue as @vMarkusK: "msg": "(\"Network interface 'Network adapter 1' uses network 'DVSwitch: xx xx xx xx xx xx xx', which is not accessible.\", None)"
I created a playbook that temporarily migrates the VM to a standard PG, does the vMotion and cleans up the workaround afterwards.
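A rough sketch of that workaround, under the assumption that community.vmware.vmware_guest_network is used to move the vNIC; the portgroup name Temp-Standard-PG, the VM name variable, and the adapter label are placeholders, and the temporary standard portgroup must already exist on both the source and destination host:

```yaml
# Workaround sketch: detach the VM from the DVS portgroup before the
# cross-cluster vMotion, then restore the real network afterwards.
- name: Move the vNIC to a temporary standard portgroup (must exist on both hosts)
  community.vmware.vmware_guest_network:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: "{{ vm_name }}"
    label: "Network adapter 1"
    network_name: "Temp-Standard-PG"   # placeholder portgroup name
  delegate_to: localhost

- name: vMotion the VM to the destination host
  community.vmware.vmware_vmotion:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    vm_name: "{{ vm_name }}"
    destination_host: "{{ host_destination.name }}"
  delegate_to: localhost

- name: Reconnect the vNIC to the correct portgroup on the destination side
  community.vmware.vmware_guest_network:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: "{{ vm_name }}"
    label: "Network adapter 1"
    network_name: "{{ network_name }}"
  delegate_to: localhost
```

Cleaning up the temporary portgroup on both hosts afterwards completes the workaround.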
@vMarkusK In your case, is it a vMotion within the same cluster?
No, it's another cluster, DVS and datacenter.
So you added a local portgroup with the same name in dc1 and dc2, configured the VM with this local portgroup, vMotioned the VM and then reconfigured it with the right network?
Right. The temporary PG exists only on the source and destination host, with a cleanup afterwards.
I tried your workaround, but the error is still there for me. I used the PGs "VM Network" and "Local Portroup".
I can't create a new local PG in my vSphere, only DVS portgroups. And it's the same with a DVS.