
`network_interface` applies IP configuration in the wrong order for `r/virtual_machine`

Open · skydion opened this issue on Jul 28, 2020 · 2 comments

Terraform Version

Terraform v0.12.29

vSphere Provider Version

provider.vsphere v1.19.0

Affected Resource(s)

vsphere_virtual_machine

Terraform Configuration Files

> var.cp_ip_addresses
{
  "0public" = [
    "69.168.x.y",
  ]
  "1mgmt" = [
    "192.168.16.5",
  ]
  "3appliance" = [
    "192.168.32.5",
  ]
  "4provisioning" = [
    "192.168.40.5",
  ]
  "5provisioning" = [
    "192.168.40.100",
  ]
  "6appliance" = [
    "192.168.32.100",
  ]
}
locals {
  cp_name  = ["tfcp10"]
  cp_count = contains(keys(var.cp_ip_addresses), "1mgmt") ? length(var.cp_ip_addresses["1mgmt"]) : 0

  cp_available_networks = {
    "0public"       = local.netids["0public"]
    "1mgmt"         = local.netids["1mgmt"]
    "3appliance"    = local.netids["3appliance"]
    "4provisioning" = local.netids["4provisioning"]
    "5provisioning" = local.netids["4provisioning"]
    "6provisioning" = local.netids["3appliance"]
  }

  cp_public_net = lookup(local.cp_available_networks, "0public", "")
}
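
An editorial note on the key naming: Terraform iterates maps in lexicographic key order, which is what the numeric prefixes above ("0public", "1mgmt", ...) rely on. A quick terraform console check against the variable shown earlier illustrates the order both dynamic blocks below should follow:

> [for key, value in var.cp_ip_addresses : key]
[
  "0public",
  "1mgmt",
  "3appliance",
  "4provisioning",
  "5provisioning",
  "6appliance",
]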

resource "vsphere_virtual_machine" "cp" {
  count            = local.cp_count
  name             = local.cp_name[count.index]
  num_cpus         = var.num_cpus["control_panel"][count.index]
  memory           = var.memory["control_panel"][count.index]
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type

  dynamic "network_interface" {
    for_each = {
      for key, value in local.cp_available_networks :
      key => value
      if value != ""
    }

    content {
      network_id   = network_interface.value
      adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
    }
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks.0.size
    thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = local.cp_name[count.index]
        domain    = var.domain
      }

      dynamic "network_interface" {
        for_each = {
          for key, value in var.cp_ip_addresses :
          key => value[count.index]
        }

        content {
          ipv4_address = network_interface.value
          ipv4_netmask = 24
        }
      }

      ipv4_gateway    = local.cp_public_net != "" ? var.gateways["public"] : var.gateways["mgmt"]
      dns_server_list = var.dns
    }
  }
}
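
An editorial aside: the two dynamic blocks iterate two different maps (local.cp_available_networks and var.cp_ip_addresses), and their key sets do not quite match, one ends in "6provisioning", the other in "6appliance". Lexicographic ordering happens to keep the positions aligned here, but a sketch that drives both blocks from a single ordered list removes that risk entirely. This is illustrative only; nic_order and cp_nics are hypothetical names, not part of the original configuration:

locals {
  # Hypothetical: one canonical NIC order shared by the hardware
  # interfaces and the customization entries.
  nic_order = sort(keys(var.cp_ip_addresses))

  cp_nics = [
    for key in local.nic_order : {
      key        = key
      network_id = lookup(local.cp_available_networks, key, "")
      ips        = var.cp_ip_addresses[key]
    }
  ]
}

# Both dynamic blocks could then use:
#   for_each = [for nic in local.cp_nics : nic if nic.network_id != ""]
# with network_interface.value.network_id in the hardware block and
# network_interface.value.ips[count.index] in the customize block, so
# the Nth vNIC and the Nth customization entry can never diverge.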

Debug Output

Panic Output

Expected Behavior

I expect the interfaces inside the VM to receive IP addresses in the order described in the Terraform output:

            network_interface {
                ipv4_address = "69.168.x.y"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.16.5"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.32.5"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.40.5"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.40.100"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.32.100"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }

Actual Behavior

Instead, the guest assigns them in this order:

192.168.40.5
69.168.x.y
192.168.40.100
192.168.16.5
192.168.32.100
192.168.32.5
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:66:e9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.5/24 brd 192.168.40.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:66e9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:e1:98 brd ff:ff:ff:ff:ff:ff
    inet 69.168.x.y/24 brd 69.168.x.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:e198/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:db:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.100/24 brd 192.168.40.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:db2f/64 scope link 
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:17:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.5/24 brd 192.168.16.255 scope global noprefixroute eth3
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:1711/64 scope link 
       valid_lft forever preferred_lft forever
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:8d:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.100/24 brd 192.168.32.255 scope global noprefixroute eth4
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:8de4/64 scope link 
       valid_lft forever preferred_lft forever
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:97:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.5/24 brd 192.168.32.255 scope global noprefixroute eth5
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:97c7/64 scope link 
       valid_lft forever preferred_lft forever
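
An editorial suggestion for narrowing this down: the guest names its devices (eth0, eth1, ...) by PCI enumeration, which need not match the order in which Terraform created the vNICs. Comparing the MAC addresses in the ip addr output above against the MACs the provider recorded shows which side reordered. A hypothetical debugging output (the output name is illustrative; mac_address is exported on each network_interface):

output "cp_nic_macs" {
  # MACs of the vNICs in the order Terraform created them; compare with
  # the link/ether lines in the guest's "ip addr" output.
  value = [
    for nic in vsphere_virtual_machine.cp[0].network_interface :
    nic.mac_address
  ]
}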

Steps to Reproduce

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

skydion commented on Jul 28, 2020

@skydion - are you still seeing this issue?

The network_interface blocks should retain their ordering, as they are a TypeList.

Can you provide a redacted but reusable version of your configuration for reproduction?

Ryan

tenthirtyam commented on Feb 26, 2022
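
(Editorial aside: because network_interface is a TypeList, its entries are index-addressable, so the order the provider actually stored can be inspected directly. A hypothetical output for checking it; the output name is illustrative:)

output "cp_nics_in_state_order" {
  # The network_interface list as stored in state; this index order is
  # the order in which customization pairs IPs with adapters.
  value = vsphere_virtual_machine.cp[0].network_interface[*].network_id
}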

If an OVA has two or more networks, the wrong networks may be assigned to the adapters when it is deployed. Confirmed with multiple provider versions, including the latest, 2.1.0.

Example OVA used: VMware Data Management for VMware Tanzu, DMS Provider OVA (dms-provider-va-1.1.0.1577-18978276.ova)

In the code below, the wrong network labels are configured for the two networks mapped in my OVA file. When the VM is powered on, network connectivity cannot be established (ping fails), but if I manually edit the VM properties and swap the VM networks, the VM responds to ping again.

data "vsphere_ovf_vm_template" "ovf" {

  name             = "${var.name}"
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  host_system_id   = "${data.vsphere_host.host.id}"
  local_ovf_path   = "${var.local_ovf_path}"
  ovf_network_map = {
    "Management Network": "${data.vsphere_network.mgmt_network.id}"
    "Control Plane Network": "${data.vsphere_network.control_plane_network.id}"
    }
  }

resource "vsphere_virtual_machine" "vm" {
  name             = "${var.name}"
  num_cpus         = 8
  memory           = 16384
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  folder           = "${var.folder}"
  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0
  datacenter_id    = "${data.vsphere_datacenter.dc.id}"
  host_system_id   = "${data.vsphere_host.host.id}"

  dynamic "network_interface" {
    for_each = "${data.vsphere_ovf_vm_template.ovf.ovf_network_map}"
    content {
      network_id = network_interface.value
    }
  }

  ovf_deploy {
    ovf_network_map = "${data.vsphere_ovf_vm_template.ovf.ovf_network_map}"
    local_ovf_path = "${data.vsphere_ovf_vm_template.ovf.local_ovf_path}"
    disk_provisioning    = "thin"
   }
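
An editorial sketch of one possible workaround, assuming the OVA declares "Management Network" as its first NIC: ovf_network_map is a map, so the dynamic for_each above walks its keys in lexicographic order ("Control Plane Network" before "Management Network"), which would invert the NIC order relative to the OVA. Declaring the interfaces explicitly pins the order:

resource "vsphere_virtual_machine" "vm" {
  name             = var.name
  num_cpus         = 8
  memory           = 16384
  resource_pool_id = var.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id
  folder           = var.folder
  datacenter_id    = data.vsphere_datacenter.dc.id
  host_system_id   = data.vsphere_host.host.id

  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0

  # NICs declared explicitly in the OVA's own order (assumed here to be
  # Management first), rather than via dynamic for_each over a map,
  # whose lexicographic key order may differ from the OVA's.
  network_interface {
    network_id = data.vsphere_network.mgmt_network.id
  }
  network_interface {
    network_id = data.vsphere_network.control_plane_network.id
  }

  ovf_deploy {
    ovf_network_map   = data.vsphere_ovf_vm_template.ovf.ovf_network_map
    local_ovf_path    = data.vsphere_ovf_vm_template.ovf.local_ovf_path
    disk_provisioning = "thin"
  }
}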

saintdle commented on Mar 1, 2022