terraform-provider-proxmox
`boot_order` is ignored when cloning a VM
Discussed in https://github.com/bpg/terraform-provider-proxmox/discussions/828
Originally posted by loganmancuso December 21, 2023
Problem Statement:
I am creating a VM to act as my ZFS server and attaching raw disks to the instance, no issues there. However, when you do that the boot order becomes unpredictable, so I need to pin it to the OS disk, in this case `virtio0`. I know there is a `boot_order` parameter and I am setting it, but it doesn't seem to do anything. Thanks for any assistance!
TLDR:
I'm having some trouble with `boot_order`. How can I go about figuring out why it's not being set on creation of the instance? Is anyone else trying to set the boot order and finding it not to be working?
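One way to start debugging this, assuming the apply is run locally: standard Terraform/OpenTofu debug logging dumps the requests the provider sends to the Proxmox API, so you can check whether a `boot` parameter is included in the clone/update calls at all.

```sh
# Illustrative only: capture provider debug logs and search for the boot
# parameter in the API traffic (TF_LOG works for both terraform and tofu).
TF_LOG=DEBUG tofu apply 2>&1 | grep -i boot
```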
Code:
This is my instance block, pretty basic, and the boot order is set to `virtio0`; however, on deployment it doesn't set the boot order in Proxmox.
resource "proxmox_virtual_environment_vm" "thoth" {
# Instance Description
name = local.vm_name
description = "# Thoth Data Server \n## ${local.vm_name}"
tags = concat(local.default_tags, ["infra"])
node_name = local.node_name
vm_id = local.vm_id
# Instance Config
clone {
vm_id = local.vm_template_id
}
on_boot = true
startup {
order = local.vm_id
up_delay = "60"
down_delay = "60"
}
operating_system {
type = "l26"
}
agent {
enabled = true
}
boot_order = ["virtio0"] # 10.29.23 right now boot_order doesnt set the parameter on creation must set manually
  # Instance Hardware
  cpu {
    architecture = "x86_64"
    cores        = 1
    type         = "x86-64-v2-AES"
  }
  memory {
    dedicated = 16384
  }
  vga {
    type = "qxl"
  }
  network_device {
    bridge   = "vmbr0"
    vlan_id  = 10
    firewall = true
  }
  disk {
    datastore_id = "local-lvm"
    size         = 32
    interface    = "virtio0"
    file_format  = "raw"
    discard      = "ignore"
  }
  serial_device {}

  # Attach raw disks for passthrough
  # dynamic "disk" {
  #   for_each = var.disks
  #   content {
  #     datastore_id = disk.value
  #     # cache = "none"
  #     file_format = "raw"
  #     interface = "scsi${index(var.disks, disk.value)}"
  #     # iothread = false
  #     path_in_datastore = "/dev/disk/by-id/${disk.value}"
  #     # size = 9314 -> null
  #     # ssd = false
  #   }
  # }

  # Instance CloudConfig Bootstrap
  initialization {
    ip_config {
      ipv4 {
        address = "${local.ip_addr}/24"
        gateway = local.vpc_gateway_network_ip
      }
    }
    user_data_file_id = proxmox_virtual_environment_file.bootstrap.id
  }

  provisioner "file" {
    when = create
    content = templatefile(local.bootstrap_src,
      {
        log_dst = "/var/log/tofu/bootstrap.log"
      }
    )
    destination = local.bootstrap_dst
    connection {
      type        = "ssh"
      user        = local.instance_credentials.username
      private_key = file("~/.ssh/id_ed25519")
      host        = local.ip_addr
    }
  }

  lifecycle {
    ignore_changes = [disk]
  }
}
```
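For reference, here is the commented-out passthrough block from above cleaned into a standalone sketch. It goes inside the resource block and assumes `var.disks` is a list of `/dev/disk/by-id` device names (a variable from my config, not shown here):

```hcl
# Sketch only: raw-disk passthrough, following the commented block above.
# var.disks is assumed to be a list of /dev/disk/by-id device names.
dynamic "disk" {
  for_each = var.disks
  content {
    datastore_id      = disk.value
    file_format       = "raw"
    interface         = "scsi${index(var.disks, disk.value)}"
    path_in_datastore = "/dev/disk/by-id/${disk.value}"
  }
}
```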
Workaround:
Right now my solution is to run a command on the Proxmox host that executes a `qm set` on the VM after it's been created; on the next reboot the instance picks up the change and it works.
resource "terraform_data" "bootstrap_instance" {
depends_on = [proxmox_virtual_environment_vm.thoth, terraform_data.attach_disks]
triggers_replace = [md5(file(local.bootstrap_src))]
provisioner "local-exec" {
command = "ssh -t root@${local.node_ip} 'qm set ${local.vm_id} --boot order=virtio0'"
}
provisioner "local-exec" {
command = local.bootstrap_cmd
}
}
This is what it looks like after the terraform_data resource runs: it shows in Proxmox as a pending change that is picked up on reboot. I'd just like for it to be set on creation. It's probably a limitation of the Proxmox API, but I can't find anything in the Proxmox docs to indicate that it can't be set on creation.
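The pending change can also be confirmed from the node's shell. A quick check, assuming root access to the Proxmox host (the VM ID `100` below is a placeholder):

```sh
# Shows config values queued until the next reboot; after the qm set above,
# 'boot' appears here as pending rather than in the live config.
qm pending 100
```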
Provisioner Details

```sh
$ tf -v
OpenTofu v1.6.0-rc1
on linux_amd64
+ provider registry.opentofu.org/bpg/proxmox v0.38.1
+ provider registry.opentofu.org/hashicorp/null v3.2.2
+ provider registry.opentofu.org/hashicorp/vault v3.23.0
```
@all-contributors please add @loganmancuso for bug
Marking this issue as stale due to inactivity in the past 180 days. This helps us focus on the active issues. If this issue is reproducible with the latest version of the provider, please comment. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!