Image resize for the fat disk type does not preallocate additional space
problem
Currently (v4.20.0.0) I see that sparse and fat disk types result in the same local disk format/size/type. After digging into it, I found the following flow for fat disks:
```
qemu-img create -o preallocation=full -f qcow2 newdisk template_size
qemu-img convert -O qcow2 -o preallocation=full -U --image-opts driver=qcow2,file.filename=template_file newdisk
qemu-img resize newdisk newsize
```
As a result, the new disk's virtual size is set to newsize, but its actual allocated size is still only template_size.
To be honest, I see no reason to run the create step at all; its result is overwritten by convert.
And to get a fat newdisk, --preallocation=full should be passed to the resize command too:

```
qemu-img resize --preallocation=full newdisk newsize
```
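For illustration, here is a sketch of what the fat-disk flow could look like with that option added (and with the create step dropped, per the point above); the file names and sizes are placeholders, not the actual CloudStack code path:

```
# convert the template into a fully preallocated qcow2 image
qemu-img convert -O qcow2 -o preallocation=full -U \
    --image-opts driver=qcow2,file.filename=template_file newdisk

# grow to the requested size, preallocating the added space as well
qemu-img resize --preallocation=full newdisk newsize
```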
This affects disk performance. In our CI environment CloudStack disks become very fragmented; qemu-img map shows some 10K entries.
Also, please consider using preallocated raw disks in place of preallocated qcow2 ones.
Thanks, Alex.
versions
- CloudStack 4.20.0.0
- Ubuntu 22.04.5 LTS
- QEMU 1:6.2+dfsg-2ubuntu6.24
- libvirt 8.0.0-1ubuntu7.10
The steps to reproduce the bug
- Create a VM with a large fat disk from a small template
- Check qemu-img info: the virtual size matches the size requested in the first step, but the disk size is still that of the original template (see the commands below)
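For reference, both checks can be run directly against the new image (newdisk as above):

```
# compare "virtual size" against "disk size"
qemu-img info newdisk

# roughly count allocated extents to gauge fragmentation
qemu-img map newdisk | wc -l
```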
What to do about it?
- Add --preallocation=full to the resize command for fat disks
- Use raw instead of qcow2 (see the sketch below)
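A minimal sketch of the raw alternative, reusing the placeholder names from above and assuming this QEMU version accepts preallocation for raw in both convert and resize:

```
# convert the qcow2 template into a fully preallocated raw image
qemu-img convert -O raw -o preallocation=full -U \
    --image-opts driver=qcow2,file.filename=template_file newdisk

# grow the raw image to the requested size, preallocating the added space
qemu-img resize -f raw --preallocation=full newdisk newsize
```

Switching to raw would also mean the disk driver type in the libvirt domain XML has to change, as noted in the comments below.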
makes sense
added to 4.19.3 milestone. cc @Pearl1594
Looks like preallocation for large disks puts a significant load on the CPU and may affect other running VMs, so it's worth running it as

```
nice -n 19 qemu-img
```

or even choosing some less busy core and doing

```
nice -n 19 taskset -c $core qemu-img
```
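Putting those together, a preallocating resize for a fat disk could look something like this ($core and the file names are placeholders):

```
# low CPU priority, pinned to a single (ideally less busy) core
nice -n 19 taskset -c $core qemu-img resize --preallocation=full newdisk newsize
```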
Please consider using raw disks in place of (or in addition to) preallocated qcow2 ones.
If you just add preallocation, nothing should need to change in the VM config, but if you want to replace qcow2 with raw, then the VM XML does indeed need an update.
@akrasnov-drv I've created PR #11986 to add preallocation for resize. The point regarding create not making any difference does make sense in some cases, but right now I'm not sure whether tinkering with that might introduce issues in other scenarios.
fixed in #11896