terraform-provider-proxmox
Add support for setting `hostpci` devices to a VM
Is your feature request related to a problem? Please describe.
When I provision VMs via Terraform that require a PCI device from the host to be passed through, I have to configure that device post-provisioning with Ansible or something similar.
Describe the solution you'd like
Just like we have the disk and network blocks, we should also have a `hostpci` block in which we can provide the device IDs to be passed through to the VM.
Describe alternatives you've considered
N/A
Additional context
N/A
Hi @mirceanton, thanks for submitting this. I don't use PCI pass-through in my environment, so I'm lacking some context here. Could you please provide some examples of the configuration you have to do, or references to PVE documentation that describes what needs to be done to configure that?
Hi, @bpg! Sure.
So basically, I am creating a VM Template with Packer (to automatically go through the OS install) and then cloning that template via Terraform, with your provider.
At the end of the process, my VM config file looks like this:
root@bingus: cat /etc/pve/qemu-server/105.conf
agent: enabled=0,fstrim_cloned_disks=0,type=virtio
arch: x86_64
balloon: 0
boot: c
cores: 12
cpu: cputype=host
cpuunits: 1024
ide2: none,media=cdrom
kvm: 1
machine: q35
memory: 16384
meta: creation-qemu=7.1.0,ctime=1669555324
name: TrueNAS
net0: virtio=8A:6E:A1:52:69:BA,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-105-disk-0,iothread=0,size=16G
scsihw: virtio-scsi-pci
smbios1: uuid=4dc1e915-6aff-49e4-8844-3de0bd56150a
sockets: 1
tablet: 0
vga: memory=16,type=qxl
vmgenid: 0361d442-79b3-4065-8948-61b5fa14118a
After that, I go into the Proxmox web UI, open my VM, and in the Hardware tab select Add > PCI Device and pick my device from the list.
The VM config file now looks like this:
root@bingus: cat /etc/pve/qemu-server/105.conf
agent: enabled=0,fstrim_cloned_disks=0,type=virtio
arch: x86_64
balloon: 0
boot: c
cores: 12
cpu: cputype=host
cpuunits: 1024
hostpci0: 0000:17:00,pcie=1 # <-- note this hostpci0 entry
ide2: none,media=cdrom
kvm: 1
machine: q35
memory: 16384
meta: creation-qemu=7.1.0,ctime=1669555324
name: TrueNAS
net0: virtio=8A:6E:A1:52:69:BA,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-105-disk-0,iothread=0,size=16G
scsihw: virtio-scsi-pci
smbios1: uuid=4dc1e915-6aff-49e4-8844-3de0bd56150a
sockets: 1
tablet: 0
vga: memory=16,type=qxl
vmgenid: 0361d442-79b3-4065-8948-61b5fa14118a
The Proxmox documentation for PCI device passthrough can be found here.
Note that, at least in my original idea, we assume the user has a properly configured PVE host (so IOMMU and all the other things in the guide are already set up), so all that is left for this provider to do is add the `hostpci0: 0000:17:00,pcie=1` line to the config.
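For context, the same hostpci0 entry can be applied through the PVE HTTP API instead of editing the config file directly, which is presumably the path a provider would take. Below is a minimal Go sketch, not the provider's actual code: the endpoint, node name, VM ID, and API token are placeholders, and TLS verification is skipped as is typical for a self-signed lab host.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Placeholder values: adjust the endpoint, node, VM ID, and token for a real host.
	endpoint := "https://pve.example.com:8006"
	node := "bingus"
	vmid := 105
	token := "PVEAPIToken=root@pam!terraform=00000000-0000-0000-0000-000000000000"

	// PUT /api2/json/nodes/{node}/qemu/{vmid}/config accepts the same keys that
	// end up in /etc/pve/qemu-server/<vmid>.conf, including hostpci0.
	form := url.Values{}
	form.Set("hostpci0", "0000:17:00,pcie=1")

	req, err := http.NewRequest(
		http.MethodPut,
		fmt.Sprintf("%s/api2/json/nodes/%s/qemu/%d/config", endpoint, node, vmid),
		strings.NewReader(form.Encode()),
	)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", token)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	// Self-signed certificates are common on lab PVE hosts; skip verification here.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}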
In terms of the format, I was thinking of adding a `hostpci` block, something like:
hostpci {
  id          = "0000:17:00"
  rombar      = true
  pcie        = true
  primary_gpu = false
}
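On the provider side, a block like that could be backed by a schema declaration roughly along these lines. This is purely a hypothetical sketch assuming terraform-plugin-sdk v2; the attribute names simply mirror the block above and are not the provider's actual schema.

package vm

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// hostPCISchema is an illustrative schema for a repeatable hostpci block;
// each list element would map to one hostpciN entry in the VM config.
func hostPCISchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"id":          {Type: schema.TypeString, Required: true},
				"rombar":      {Type: schema.TypeBool, Optional: true, Default: true},
				"pcie":        {Type: schema.TypeBool, Optional: true, Default: false},
				"primary_gpu": {Type: schema.TypeBool, Optional: true, Default: false},
			},
		},
	}
}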
And for the output to the config file, something like (pseudocode/logic):
- for each `hostpci` block, increment the counter
- set `hostpci_line = "hostpci<counter>: <id>"`
- if `rombar` is false: `hostpci_line += ",rombar=0"`
- if `pcie` is true: `hostpci_line += ",pcie=1"`
- if `primary_gpu` is true: `hostpci_line += ",x-vga=1"`

And with that, we would cover 99% of the use cases, I would say.
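A rough Go translation of the pseudocode above could look like the following. This is a sketch only; the type and function names are made up for illustration.

package main

import (
	"fmt"
	"strings"
)

// hostPCI mirrors the attributes of the proposed hostpci block.
type hostPCI struct {
	ID         string
	ROMBar     bool
	PCIE       bool
	PrimaryGPU bool
}

// buildHostPCIValue renders one block into the value part of a
// "hostpciN: ..." config entry, e.g. "0000:17:00,pcie=1".
func buildHostPCIValue(d hostPCI) string {
	parts := []string{d.ID}
	if !d.ROMBar {
		parts = append(parts, "rombar=0")
	}
	if d.PCIE {
		parts = append(parts, "pcie=1")
	}
	if d.PrimaryGPU {
		parts = append(parts, "x-vga=1")
	}
	return strings.Join(parts, ",")
}

func main() {
	devices := []hostPCI{{ID: "0000:17:00", ROMBar: true, PCIE: true}}
	for i, d := range devices {
		// The loop counter becomes the hostpciN index.
		fmt.Printf("hostpci%d: %s\n", i, buildHostPCIValue(d))
	}
}

Run against the example device, this prints hostpci0: 0000:17:00,pcie=1, matching the line the web UI added to the config above.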
Awesome! Thanks for the details!
Let me know if there is any way I can help! I am not too familiar with Go or with writing Terraform providers, but I do have quite a bit of coding experience.
This feature is what I'm waiting for to set up a new server with Proxmox. Thank you!
@numkem In the meantime, if you are OK with using multiple tools, you can take a look at how I solved this problem in my project here.
Basically, I am using Ansible to apply Terraform and then modify the VM definition file manually to add the `hostpci` field. Maybe something like this could also work for you.
@mirceanton I appreciate the example but I think I might just import the resource when the provider supports it.
@bpg Thank you for implementing this!