
Enable Migration of VSS with single Uplink to DVS with single Uplink

PrymalInstynct opened this issue 2 years ago

SUMMARY

It is possible to migrate a host with a single vmnic uplink from its default vSwitch0 'VM Network' to a Distributed vSwitch via the vCenter vSphere Web Client without losing connectivity to the host. This is not possible through the vmware_dvs_host, vmware_vswitch, or vmware_migrate_vmk modules: each of them appears to have safeguards that prevent removing a vmnic from a switch when it is the only vmnic assigned.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Explored modules: vmware_dvs_host, vmware_vswitch, vmware_migrate_vmk

Suggested module name: vmware_host_migrate_vss_dvs

ADDITIONAL INFORMATION

This feature would be used to migrate a host and its uplinks between virtual switches (standard and/or distributed).

Possible module usage examples:

- name: Migrate vSwitch0 to dvSwitch0 with single UpLink
  community.vmware.vmware_host_migrate_vss_dvs:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    current_switch_name: vSwitch0
    current_portgroup_name: 'VM Network'
    migrate_switch_name: dvSwitch0
    migrate_portgroup_name: Management
    vmnics:
      - vmnic0
  delegate_to: localhost

- name: Migrate vSwitch0 to dvSwitch0 with Multiple UpLinks & LAG
  community.vmware.vmware_host_migrate_vss_dvs:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    current_switch_name: vSwitch0
    current_portgroup_name: 'VM Network'
    migrate_switch_name: dvSwitch0
    migrate_portgroup_name: Management
    lag_uplinks:
      - lag: lag1
        vmnics:
          - vmnic0
          - vmnic1
      - lag: lag2
        vmnics:
          - vmnic2
          - vmnic3
  delegate_to: localhost

PrymalInstynct · Aug 31 '22 21:08

I just ran into this issue too. I can reconfigure the uplink from vSwitch0 to a distributed virtual switch via the GUI, but I get an error when I use the vmware_dvs_host module. This would be a great feature/module.

JoeHazurmoney · Sep 01 '22 15:09

This is achievable through PowerCLI as well as pyvmomi directly - it would be sweet to see this implemented as a module.

HerbBoy · Sep 06 '22 16:09

This is achievable through PowerCLI as well as pyvmomi directly - it would be sweet to see this implemented as a module.

Are you sure this can be done with pyVmomi? Do you have some example code, by any chance?

mariolenz · Sep 23 '22 08:09

I have run into this issue as well. What I have done as a workaround is to perform the following:
  • Create the VDS
  • Create the DPG
  • Add the hosts to the mgmt VDS excluding vmnic0, adding only vmnic1
  • Migrate vmk0 from the VSS to the VDS
  • Remove the VSS
  • Add vmnic0 to the VDS

When you add vmnic0, make sure you list vmnic1 as well; this will configure the correct order of the uplinks.
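For illustration, here is a minimal Ansible sketch of that sequence using the existing modules. It assumes the DVS dvSwitch0 and a distributed portgroup Management already exist, and that vmk0 currently lives on vSwitch0's 'Management Network' portgroup; these names are placeholders, and parameter names may differ slightly between collection versions.

- name: Add host to the DVS using only vmnic1
  community.vmware.vmware_dvs_host:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    switch_name: dvSwitch0
    vmnics:
      - vmnic1
    state: present
  delegate_to: localhost

- name: Migrate vmk0 from the VSS to the VDS
  community.vmware.vmware_migrate_vmk:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    device: vmk0
    current_switch_name: vSwitch0
    current_portgroup_name: 'Management Network'
    migrate_switch_name: dvSwitch0
    migrate_portgroup_name: Management
  delegate_to: localhost

- name: Remove the now-empty standard vSwitch
  community.vmware.vmware_vswitch:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    switch: vSwitch0
    state: absent
  delegate_to: localhost

- name: Add vmnic0, listing vmnic1 as well to keep the uplink order
  community.vmware.vmware_dvs_host:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    switch_name: dvSwitch0
    vmnics:
      - vmnic0
      - vmnic1
    state: present
  delegate_to: localhost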

I am, however, running into another issue while attempting to apply this configuration via a workflow template. I am receiving: Cannot complete operation due to concurrent modification by another operation within vSphere.

I am only running the playbook against 3 hosts in a cluster.

Any suggestions?

dk-rbrown · Nov 11 '22 16:11

I am receiving: Cannot complete operation due to concurrent modification by another operation within vSphere.

You have to add 'throttle: 1' to your task. This is necessary because hosts can only be added to a DVS one at a time, while Ansible runs a task against multiple hosts in parallel (5 forks by default).
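For example, a minimal sketch (the switch and vmnic names are placeholders):

- name: Add hosts to the DVS one at a time
  community.vmware.vmware_dvs_host:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    esxi_hostname: "{{ esxi_hostname }}"
    switch_name: dvSwitch0
    vmnics:
      - vmnic1
    state: present
  delegate_to: localhost
  throttle: 1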

JoschuaA4 · Nov 21 '22 08:11

That feature would really be helpful. We sometimes have new locations where cabling is not complete, but we need to start the configuration with just a single interface connected to a switch. Not being able to move the vmk ports for mgmt traffic always breaks the deployment.

ralfgro · Jun 06 '23 13:06

@JoschuaA4 I have set all my playbooks related to vSphere to throttle at 1. vSphere will still not be happy with the loss of connectivity to a host. I have created a workflow order to work around this:
  • Create the new VDS
  • Create/modify the DPG
  • Add hosts to all VDS
  • Migrate the mgmt vmk
  • Remove the VSS
  • Migrate vmnic0
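A hedged sketch of the first two steps, with placeholder names (the later steps mirror the sketch earlier in this thread, and exact portgroup parameters vary between collection versions):

- name: Create the new VDS
  community.vmware.vmware_dvswitch:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter_name: "{{ datacenter_name }}"
    switch_name: dvSwitch0
    uplink_quantity: 2
    mtu: 1500
    state: present
  delegate_to: localhost

- name: Create the Management DPG
  community.vmware.vmware_dvs_portgroup:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    portgroup_name: Management
    switch_name: dvSwitch0
    vlan_id: 0
    num_ports: 32
    port_binding: static  # older collection versions use portgroup_type: earlyBinding instead
    state: present
  delegate_to: localhost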

dk-rbrown · Jun 06 '23 14:06

But with just one vmnic, won't it break after "Migrate the mgmt vmk"? Or does throttle help?

ralfgro · Jun 06 '23 15:06