terraform-provider-proxmox
proxmox_virtual_environment_vm: cdrom / disk import issue
Describe the bug
proxmox_virtual_environment_vm's disk size can be equal to 0, but the current validation requires it to be greater than or equal to 1.
To Reproduce
Steps to reproduce the behavior:
- Create a VM (as root@pam) via the web UI with an ISO from local storage
- Create the corresponding Terraform resource
- tofu import proxmox_virtual_environment_vm.foo foo/100
- tofu plan
- Bug: the plan output shows ~ disk { ~ size = 0 -> 8 }
- Try to set size = 0 in the configuration
- tofu plan
- Bug: Error: expected size to be at least (1), got 0
Please also provide a minimal Terraform configuration that reproduces the issue.
Note the disk section:
resource "proxmox_virtual_environment_vm" "foo" {
name = "foo"
node_name = "foo"
vm_id = 100
operating-system {
type = "126"
}
# iso
disk {
datastore_id = "local-btrfs"
file_format = "iso"
path_in_datastore = "iso/archlinux-2024.06.01-x86_64.iso" ## replace( proxmox_virtual_environment_download_file.foo.id, "/^[^:]+:/", "" ) # issue: #1371
interface = "ide2"
size = 0
}
vga {
memory = 16
type = "virtio"
}
scsi_hardware = "virtio-scsi-single"
agent {
enabled = false
}
bios = "seabios"
# memory { }
# network-device { }
}
Running tofu plan yields:
Planning failed. OpenTofu encountered an error while generating this plan.

│ Error: expected size to be at least (1), got 0
│
│   with proxmox_virtual_environment_vm.foo,
│   on virtual_vms.tf line 98, in resource "proxmox_virtual_environment_vm" "foo":
│   98:   size = 0
Expected behavior
tofu plan should allow disk size = 0.
Additional context
Tested patching proxmoxtf/resource/vm/disk/schema.go L134 ( ValidateDiagFunc: validation.ToDiagFunc(validation.IntAtLeast(1)), ); changing the 1 to a 0 seems to make the issue go away (though I did not get to running tofu apply yet due to other fun things).
edit1: apply actually gives around 50 "legacy plugin sdk" potential problems and eventually runs into an out-of-range panic (the panic also occurs on stock 0.59.0; digging into that)
edit2: got tofu apply running with the L134 tweak above; using stock 0.59.0 with size = 0 fails, as does setting size = 1 (due to it being an ISO). (The panic from edit1 was due to incomplete blocks I never cleaned up from the example VM resource.)
- Single or clustered Proxmox: single
- Proxmox version: 8.2.2
- Provider version (ideally it should be the latest version): 0.59.0
- Terraform/OpenTofu version: 1.7.1
- OS (where you run Terraform/OpenTofu from): archlinux
- Debug logs (TF_LOG=DEBUG terraform apply): n/a
Curious about the use case of having 0-size disks 🤔
PVE does not allow it:
It's the ISO; when I did the import, it put the ISO under the disk option, set the type to 'iso', and set the size to '0'.
Sorry, still didn't get it... If it is an .iso file, wouldn't you rather attach it as a cdrom and install from it?
EDIT: just saw your uploaded screenshot. Yes, it should be cdrom instead of disk.
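For reference, a minimal sketch of attaching the ISO via a cdrom block instead of a disk block (values mirror the plan output further down; adjust the datastore and file name for your setup):

  cdrom {
    enabled   = true
    file_id   = "local-btrfs:iso/archlinux-2024.06.01-x86_64.iso"
    interface = "ide2"
  }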
Well, gave it a go changing the disk block holding the ISO to cdrom and got an error on apply (note this was imported originally, so the ISO was set as a disk):
proxmox_virtual_environment_vm.foo: Modifying... [id=100]

│ Error: deletion of disks not supported. Please delete disk by hand. Old interface was "ide2"
│
│   with proxmox_virtual_environment_vm.foo,
│   on virtual_vms.tf line 40, in resource "proxmox_virtual_environment_vm" "foo":
│   40: resource "proxmox_virtual_environment_vm" "foo" {
Nuking that and starting fresh: a clean import and the next plan yields:
  # proxmox_virtual_environment_vm.foo will be updated in-place
  ~ resource "proxmox_virtual_environment_vm" "foo" {
        id   = "100"
        name = "foo"
        tags = [
            "archlinux",
            "terraform",
        ]
        # (27 unchanged attributes hidden)

      + cdrom {
          + enabled   = true
          + file_id   = "local-btrfs:iso/archlinux-2024.06.01-x86_64.iso"
          + interface = "ide2"
        }

      ~ disk {
          ~ file_format       = "iso" -> "raw"
          ~ interface         = "ide2" -> "scsi0"
          ~ iothread          = false -> true
          ~ path_in_datastore = "iso/archlinux-2024.03.29-x86_64.iso" -> "100/vm-100-disk-0.raw"
          ~ size              = 0 -> 32
            # (7 unchanged attributes hidden)
        }
      - disk {
          - aio               = "io_uring" -> null
          - backup            = true -> null
          - cache             = "none" -> null
          - datastore_id      = "local-btrfs" -> null
          - discard           = "ignore" -> null
          - file_format       = "raw" -> null
          - interface         = "scsi0" -> null
          - iothread          = true -> null
          - path_in_datastore = "100/vm-100-disk-0.raw" -> null
          - replicate         = true -> null
          - size              = 32 -> null
          - ssd               = false -> null
        }

        # (5 unchanged blocks hidden)
    }
Ran the apply and it errors:
│ Error: deletion of disks not supported. Please delete disk by hand. Old interface was "ide2"
│
│   with proxmox_virtual_environment_vm.foo,
│   on virtual_vms.tf line 40, in resource "proxmox_virtual_environment_vm" "foo":
│   40: resource "proxmox_virtual_environment_vm" "foo" {
A subsequent plan + apply and all is healthy; I think this bug is now just an import issue.
Hi, I switched from the Telmate provider, and when I want to delete the disks I get this:
│ Error: deletion of disks not supported. Please delete disk by hand.
Is this okay or is it a bug? I find it strange that I can't delete disks and then recreate them.
Hi @gustavodrodriguez 👋🏼
This is a known limitation (hence the proper error message) due to the current VM resource format. Each disk block is represented as an item in an ordered list. If there is more than one disk in the list, removing a non-last item causes the following items to shift up.
This leads to major issues with state reconciliation. Imagine you have disks like ['scsi0', 'sata1']. If you remove 'scsi0' and add 'sata2' at the end, the new list would look like ['sata1', 'sata2']. From the provider's perspective, both disk devices appear to be updated: 'scsi0' -> 'sata1', 'sata1' -> 'sata2' (interfaces and all other disk attributes), which is obviously incorrect.
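To make the shift concrete, here is a hypothetical sketch assuming the current list-based schema (the disk sizes are made up for illustration):

  # Before: two disk blocks, list positions 0 and 1
  disk {
    interface = "scsi0"
    size      = 32
  }
  disk {
    interface = "sata1"
    size      = 8
  }

  # After removing scsi0 and appending sata2, sata1 shifts to
  # position 0. The provider diffs the list by position, so it
  # sees scsi0 -> sata1 and sata1 -> sata2 as in-place updates.
  disk {
    interface = "sata1"
    size      = 8
  }
  disk {
    interface = "sata2"
    size      = 16
  }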
A proper fix for this issue will come with #1231, where I plan to change the disk format from a list to a map. Here is a prototype of the same change for cdrom.
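Purely as an illustration of that direction (the final syntax in #1231 may differ), keying disks by interface would give each one a stable identity:

  disk = {
    scsi0 = {
      datastore_id = "local-btrfs"
      size         = 32
    }
    sata1 = {
      datastore_id = "local-btrfs"
      size         = 8
    }
  }

  # Removing the "scsi0" key would no longer change the identity
  # of the remaining entries; each disk is diffed by its own key.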
Marking this issue as stale due to inactivity in the past 180 days. This helps us focus on the active issues. If this issue is reproducible with the latest version of the provider, please comment. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!