terraform-provider-libvirt
Fix cloud-init on aarch64
I played with this provider on aarch64 and I found out that cloud-init doesn't work there. Two fixes are needed:
- the cloud-init cdrom must be connected using scsi (AFAIK ide isn't an option on aarch64)
- since we are now using scsi, we also need to make sure to add the SCSI controller on aarch64
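With both fixes applied, the relevant part of the generated domain XML on aarch64 should look roughly like the sketch below. This is illustrative only: the device name, alias, and PCI address are assumptions, not the provider's actual output.

```xml
<!-- Illustrative sketch: the cloud-init ISO attached via SCSI, plus the
     virtio-scsi controller that must exist on aarch64. Names and
     addresses here are examples, not the provider's real output. -->
<devices>
  <disk type='file' device='cdrom'>
    <source file='/var/lib/libvirt/terraform/commoninit.iso'/>
    <target dev='sda' bus='scsi'/>
    <readonly/>
  </disk>
  <controller type='scsi' index='0' model='virtio-scsi'/>
</devices>
```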
I successfully tested this change with the following terraform file:
terraform {
  required_providers {
    libvirt = {
      source = "terraform.budai.cz/dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_pool" "cluster" {
  name = "terraform"
  type = "dir"
  path = "/var/lib/libvirt/terraform"
}

resource "libvirt_volume" "fedora34-qcow2" {
  name   = "fedora34.qcow2"
  pool   = libvirt_pool.cluster.name
  source = "https://download.fedoraproject.org/pub/fedora/linux/releases/34/Cloud/aarch64/images/Fedora-Cloud-Base-34-1.2.aarch64.qcow2"
  format = "qcow2"
}

data "template_file" "user_data" {
  template = file("${path.module}/cloud_init.cfg")
}

resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  user_data = data.template_file.user_data.rendered
  pool      = libvirt_pool.cluster.name
}

resource "libvirt_domain" "test" {
  name    = "fedora34"
  memory  = 1024
  arch    = "aarch64"
  machine = "virt-5.2"

  cpu {
    mode = "host-passthrough"
  }

  cloudinit = libvirt_cloudinit_disk.commoninit.id

  network_interface {
    network_name = "default"
  }

  disk {
    volume_id = libvirt_volume.fedora34-qcow2.id
  }
}
If you are looking for a quick workaround for this issue, you can also use the following XSLT (also tested):
<?xml version="1.0" ?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output omit-xml-declaration="yes" indent="yes"/>
  <!-- copy the whole xml doc to start with -->
  <xsl:template match="node()|@*">
    <xsl:copy>
      <xsl:apply-templates select="node()|@*"/>
    </xsl:copy>
  </xsl:template>
  <!-- replace <target dev='hdd'...> with <target dev='sda'...> -->
  <xsl:template match="/domain/devices/disk[@device='cdrom']/target/@dev">
    <xsl:attribute name="dev">
      <xsl:value-of select="'sda'"/>
    </xsl:attribute>
  </xsl:template>
  <!-- replace <target bus='ide'...> with <target bus='scsi'...> -->
  <xsl:template match="/domain/devices/disk[@device='cdrom']/target/@bus">
    <xsl:attribute name="bus">
      <xsl:value-of select="'scsi'"/>
    </xsl:attribute>
  </xsl:template>
  <xsl:template match="/domain/devices">
    <devices>
      <xsl:apply-templates/>
      <controller type='scsi' index='0' model='virtio-scsi'>
        <alias name='scsi0'/>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </controller>
    </devices>
  </xsl:template>
</xsl:stylesheet>
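If you go the XSLT route, the provider can apply the stylesheet for you: libvirt_domain supports an xml block with an xslt argument. A minimal sketch, assuming the stylesheet above is saved as cdrom-scsi.xsl next to the terraform file (the file name is an assumption, use whatever path you saved it to):

```hcl
# Sketch: apply the workaround stylesheet to the generated domain XML.
resource "libvirt_domain" "test" {
  name      = "fedora34"
  memory    = 1024
  arch      = "aarch64"
  machine   = "virt-5.2"
  cloudinit = libvirt_cloudinit_disk.commoninit.id

  # Transform the domain XML before it is defined in libvirt.
  xml {
    xslt = file("${path.module}/cdrom-scsi.xsl")
  }
}
```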
Blocking issue for us too, we can't start a VM on arm. @dmacvicar is there a chance to push it? Or maybe even just add an option to set the "bus" param for the disk?
That option can't be evaluated until we understand whether virtio can be used in both cases. There is no reason to make things more complicated or configurable just in case.
Would the example in https://github.com/dmacvicar/terraform-provider-libvirt/pull/895#pullrequestreview-780284017 work in both architectures?
Not sure I got right what you meant. What I tried is to set the bus of the cdrom to virtio, and this option is not supported on x86 nor on arm:
error: unsupported configuration: disk type 'virtio' of 'hda' does not support ejectable media
error: unsupported configuration: virtio disk cannot have an address of type 'drive'
@tsorya if you look at the example I pasted above, judging from the cloud-init documentation there is no need for a CDROM. Just a disk with the right label, and this disk can be virtio (at least in the example it is).
@dmacvicar adding the iso as a disk with virtio works. Though it creates an inconsistency with x86. IMO the same terraform file should work on arm and on x86, and it is nice to have a cdrom too.
Why? x86 works with disk too.
Hello, I can report that the same issue arises not only on aarch64 but also on x64 as long as OVMF is involved: apparently there is a detection issue with IDE at first boot, and one needs to either use SCSI or reboot once to trigger proper commoninit execution.
That may play in favor of exploring the virtio solution rather than having to deal with a multiple-case/switch sort of situation? I am not too well-versed in Go, but I can probably test something if someone comes up with a patch.
I ran into this issue as well and tested with an Ubuntu cloudimg (using UEFI).
- "only" using cloudinit in the libvirt_domain does not work, as it uses ide as the bus type for the CDROM
- using a disk pointing to the same .iso file - same result (it automatically creates a CDROM with bus ide)
- both cases can be "fixed" by patching the bus with xslt to use sata

However, when using a vfat disk image with the right label, it works without patching the bus:
imgfile=/var/lib/libvirt/images/cidata.img
dd if=/dev/zero of=$imgfile bs=1M count=2
mformat -i $imgfile -v cidata
mcopy -i $imgfile cloud_init.cfg ::user-data
mcopy -i $imgfile network_config.cfg ::network-config
touch meta-data
mcopy -i $imgfile meta-data ::meta-data
mdir -i $imgfile
and reference it as a normal disk:

disk {
  file = "/var/lib/libvirt/images/cidata.img" # vfat
}
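Putting the pieces together, a hedged sketch of what the domain resource could look like with the vfat seed attached alongside the boot disk. The path and resource names here are assumptions carried over from the earlier examples, not a tested configuration:

```hcl
# Sketch only: boot disk plus the vfat NoCloud seed attached as a plain
# disk. cloud-init's NoCloud datasource finds it by the "cidata" label,
# so no CDROM (and no ide/scsi bus fiddling) is needed.
resource "libvirt_domain" "test" {
  name   = "fedora34"
  memory = 1024

  disk {
    volume_id = libvirt_volume.fedora34-qcow2.id
  }

  disk {
    file = "/var/lib/libvirt/images/cidata.img" # vfat, labeled "cidata"
  }
}
```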