terraform-provider-vsphere
Updating the `datastore_id` on the `r/virtual_machine` does not apply to disk sub-resources
Terraform Version
0.13.5
vSphere Provider Version
1.24.2
Affected Resource(s)
- vsphere_virtual_machine
Terraform Configuration Files
terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "~> 1.24.2"
    }
  }
}

provider "vsphere" {
  vsphere_server       = var.vcenter
  user                 = var.VSPHERE_USER
  password             = var.VSPHERE_PASSWORD
  allow_unverified_ssl = true
}

data "vsphere_datacenter" "dc" {
  name = var.datacenter
}

data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "manual" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.network
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "vm1" {
  name                       = var.vmname
  resource_pool_id           = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_id               = data.vsphere_datastore.manual.id
  num_cpus                   = 1
  memory                     = 2048
  guest_id                   = "other3xLinux64Guest"
  wait_for_guest_net_timeout = 0

  network_interface {
    network_id   = data.vsphere_network.network.id
    adapter_type = "vmxnet3"
  }

  disk {
    label            = "disk0"
    size             = var.c_size
    eagerly_scrub    = false
    thin_provisioned = true
    unit_number      = 0
  }
}
Debug Output
https://gist.github.com/jweigand/174beffb17846c1cee70b5c804f59db7
Expected Behavior
The datastore_id should change for both the global value on the VM resource and the disk subresource, causing a storage migration of both the general VM files and the disk VMDK file.
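For comparison, a sketch of the plan output one would expect here (managed object IDs taken from the actual output below); both the global and the per-disk datastore_id would be marked for change:

  # vsphere_virtual_machine.vm1 will be updated in-place
  ~ resource "vsphere_virtual_machine" "vm1" {
      ~ datastore_id = "datastore-1687" -> "datastore-82"

      ~ disk {
          ~ datastore_id = "datastore-1687" -> "datastore-82"
        }
    }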
Actual Behavior
The datastore_id only changes for the global value, and only the general VM files are storage migrated. The disk subresource's datastore_id is not detected as needing a change, and the disk VMDK files are left on the original datastore. The plan output shows the change only at the VM level:
  # vsphere_virtual_machine.vm1 will be updated in-place
  ~ resource "vsphere_virtual_machine" "vm1" {
      ~ datastore_id = "datastore-1687" -> "datastore-82"

        disk {
            attach          = false
            controller_type = "scsi"
            datastore_id    = "datastore-1687"
Steps to Reproduce
- terraform apply to create the VM.
- Change the value of var.datastore to the name of a different datastore (a minimal sketch of this change follows the list).
- terraform plan / terraform apply to see that the disk subresource datastore_id value does not show it should change.
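For illustration, a minimal sketch of the variable change from step 2, assuming var.datastore is a plain string variable; the datastore names here are hypothetical:

variable "datastore" {
  type    = string
  default = "datastore2" # hypothetical; previously "datastore1"
}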
Important Factoids
This same issue happens when using SDRS datastore clusters, rather than individual datastores.
On the initial VM creation, this is shown in the trace (edited for brevity):
2020/12/04 14:31:53 [WARN] Provider "registry.terraform.io/hashicorp/vsphere" produced an unexpected new value for vsphere_virtual_machine.vm1, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .disk[0].datastore_id: was cty.StringVal("<computed>"), but now cty.StringVal("datastore-1687")
References
- #1125 (further down, it seems to show the same issue of the datastore_id value not changing for disks)
- The provider documentation specifically notes this should work (see the sketch below): "Global datastore migration can be handled by changing the global datastore_id attribute. This triggers a storage migration for all disks that do not have an explicit datastore_id specified."
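To illustrate the documented behavior, a minimal sketch (the data source names new and pinned are hypothetical): a disk without its own datastore_id should follow the global attribute when it changes, while a disk with an explicit datastore_id should be excluded from the migration:

resource "vsphere_virtual_machine" "vm1" {
  # ...
  datastore_id = data.vsphere_datastore.new.id # changing this should migrate disk0

  disk {
    label       = "disk0"
    size        = 20
    unit_number = 0
    # no explicit datastore_id, so this disk should follow the global attribute
  }

  disk {
    label        = "disk1"
    size         = 20
    unit_number  = 1
    datastore_id = data.vsphere_datastore.pinned.id # explicit, so excluded from global migration
  }
}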
It appears as though this was introduced in v1.24.1 - it works correctly/as expected in v1.24.0, then stops working in v1.24.1 and v1.24.2.
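Until this is fixed, a possible mitigation (a sketch, assuming the regression range above is accurate) is to pin the provider to v1.24.0, the last version where global datastore migration worked as documented:

terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "1.24.0" # last version where changing the global datastore_id migrated the disks
    }
  }
}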
Hello, there is a workaround for this that works with 1.24.0 and with the latest version (and probably all versions in between): for every disk block in your virtual machine, add a datastore_id set to the same value as on the VM resource. Here is an example:
resource "vsphere_virtual_machine" "vm1" {
name = var.vmname
resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
datastore_id = data.vsphere_datastore.manual.id
num_cpus = 1
memory = 2048
guest_id = "other3xLinux64Guest"
wait_for_guest_net_timeout = 0
network_interface {
network_id = data.vsphere_network.network.id
adapter_type = "vmxnet3"
}
disk {
datastore_id = data.vsphere_datastore.manual.id
label = "disk0"
size = var.c_size
eagerly_scrub = false
thin_provisioned = true
unit_number = 0
}
}
👍🏻 to @siwyroot's suggestion, as the datastore_id is a valid argument to pass to each disk:
https://github.com/hashicorp/terraform-provider-vsphere/blob/814675b5da80f286a082664d22455b7c8264134a/vsphere/internal/virtualdevice/virtual_machine_disk_subresource.go#L63-L69
Ryan
> 👍🏻 to @siwyroot's suggestion, as the datastore_id is a valid argument to pass to each disk.
We are specifying datastore_cluster_id. In this case you can't specify a datastore_id per disk, so the workaround does not work.
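For context, a minimal sketch of the datastore cluster variant (the data source name pod and the variable names are hypothetical); with datastore_cluster_id set on the VM, a per-disk datastore_id cannot be specified, so the per-disk workaround above does not apply:

data "vsphere_datastore_cluster" "pod" {
  name          = var.datastore_cluster
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "vm1" {
  name                 = var.vmname
  resource_pool_id     = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_cluster_id = data.vsphere_datastore_cluster.pod.id

  disk {
    label = "disk0"
    size  = var.c_size
    # no datastore_id here: per this comment, it cannot be set per disk
    # when datastore_cluster_id is used on the VM resource
  }
}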
Hi @pascal-hofmann,
The suggested workaround is based on the information provided in the original issue description, which did not demonstrate the use of a datastore cluster.
data "vsphere_datastore" "manual" {
name = var.datastore
datacenter_id = data.vsphere_datacenter.dc.id
}
//…
resource "vsphere_virtual_machine" "vm1" {
name = var.vmname
resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
datastore_id = data.vsphere_datastore.manual.id
…
Ryan
I'm currently waiting to get access to a vSphere test setup, and will work on this issue once I have access (at least the datastore_cluster_id part, but I guess both issues are related).
This functionality has been released in v2.3.0 of the Terraform Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
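To upgrade, a minimal sketch of the corresponding constraint change, adapted from the configuration in this issue:

terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "~> 2.3" # v2.3.0 or later contains the fix
    }
  }
}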
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.