terraform-provider-vcd
allow internal_disk creation at VM creation time
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Description
It would be valuable if new internal disk(s) could be added at VM creation time.
I have a disk initialization script that the VM itself executes at startup time. However, when vcd_vapp_vm has power_on = true, the disk initialization script could potentially run before the vcd_vm_internal_disk resources are created, so the script would be ineffective.
Currently, my idea is to set power_on = false and then use a "local-exec" provisioner to power on the VM after the disks are created (the provisioner would depend on the vcd_vm_internal_disk resource). Please see the example below.
Independent/named disks aren't an option for me as I need to allow snapshots.
Would it be possible to allow creation and attachment of internal disks within the vcd_vapp_vm resource itself?
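Purely as an illustration of the request, it could look something like this (the internal_disk block and its arguments are hypothetical, modeled on vcd_vm_internal_disk; this is not existing provider syntax):

resource "vcd_vapp_vm" "vapp_vm" {
  vapp_name     = var.vapp_name
  name          = var.vm_name
  catalog_name  = var.catalog_name
  template_name = var.template_name
  power_on      = true

  # Hypothetical inline block, mirroring the vcd_vm_internal_disk arguments;
  # the disk would be created before the VM is first powered on.
  internal_disk {
    bus_type    = "paravirtual"
    size_in_mb  = var.disk1_size * 1024
    bus_number  = 1
    unit_number = 0
  }
}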
New or Affected Resource(s)
- vcd_vapp_vm
Terraform Configuration (if it applies)
resource "vcd_vapp_vm" "vapp_vm" {
vapp_name = var.vapp_name
name = var.vm_name
catalog_name = var.catalog_name
template_name = var.template_name
cpus = var.vm_cpus
memory = var.vm_memory * 1024
power_on = false
dynamic "network" {
for_each = var.org_networks
content {
type = "org"
name = network.value
ip_allocation_mode = var.org_networks_ip_allocation_mode
}
}
}
resource "vcd_vm_internal_disk" "internal_disk1" {
count = local.create_internal_disk1 ? 1 : 0
vapp_name = var.vapp_name
vm_name = vcd_vapp_vm.vapp_vm.name
bus_type = "paravirtual"
size_in_mb = var.disk1_size * 1024
bus_number = 1
unit_number = 0
}
resource "null_resource" "start_vm" {
# don't proceed with powering on the VM until the disk is created.
# Note, this dependency works even if the VM doesn't have an extra disk
depends_on = [vcd_vm_internal_disk.internal_disk1]
provisioner "local-exec" {
command = "${path.module}/start_vm.py.exe -v ${var.vm_name}
}
}
Hi @rdavisunr
Thank you for your idea. Naturally, we try to follow the Terraform perspective, which dictates handling such things as separate resources for better scripting and management capabilities (and to avoid large monolithic resources, which have significant disadvantages). We will think about what we can do in this case.
Thank you @vbauzysvmware, I appreciate the consideration.
It appears the vsphere provider supports this use case through its disk block(s), so some precedent exists.
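For reference, roughly how that looks there (simplified, argument names from memory; see the vSphere provider docs for the exact syntax):

# Simplified illustration of the vSphere provider pattern (not vcd syntax);
# arguments may differ by provider version.
resource "vsphere_virtual_machine" "example" {
  name             = "example-vm"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = "other3xLinux64Guest"

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  # Disks are declared inline, so they exist before the first power-on
  disk {
    label = "disk0"
    size  = 50
  }

  disk {
    label       = "disk1"
    size        = 100
    unit_number = 1
  }
}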
Thank you @rdavisunr for the reference. We can't always follow vSphere or other examples, as the API is different and has different capabilities. We will check the possibility.
I would like to suggest an additional use case where this feature would be beneficial.
Scenario: I need to create Session Hosts for use with Citrix Provisioning Services (PVS). To do this, I create empty VMs with a 50 GB drive on Bus 0, Unit 0. However, a problem arises when the server attempts to PXE boot from PVS, resulting in a Blue Screen of Death (BSOD), error CTX229910. The issue is caused by incorrect assignment of pciSlotNumber values to ethernet0 and scsi0: according to that article, "ethernet0.pciSlotNumber" should be set to 192 and "scsi0.pciSlotNumber" should be set to 160. Unfortunately, when the VM is created with a NIC and powered on before the disk is attached, ethernet0.pciSlotNumber is assigned 160 and scsi0.pciSlotNumber 192.
As suggested by @rdavisunr, one workaround is to run Terraform with the "power_on" flag set to false and then use a script with local-exec to power on the VM. Alternatively, a second Terraform run can be performed with the "power_on" flag set to true, which powers on the VMs. However, these workarounds introduce an additional step and potential inconsistencies, or may even cause downtime on the next Terraform run if the "power_on" flag is left as false.
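For illustration, a minimal sketch of the variable-driven two-run variant (the power_on variable is just an example name, not provider syntax):

# Minimal sketch of the two-run workaround using an input variable.
variable "power_on" {
  type    = bool
  default = false
}

resource "vcd_vapp_vm" "vapp_vm" {
  vapp_name     = var.vapp_name
  name          = var.vm_name
  catalog_name  = var.catalog_name
  template_name = var.template_name
  cpus          = var.vm_cpus
  memory        = var.vm_memory * 1024
  power_on      = var.power_on
}

# Run 1: create the VM and disks while powered off
#   terraform apply
# Run 2: flip the flag to power the VM on once the disks exist
#   terraform apply -var="power_on=true"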
@vbauzys Any progress on this issue?