containerized-data-importer
Dynamically resize virtual disk image based on size of cloned PVC
Is this a BUG REPORT or FEATURE REQUEST?:
/kind enhancement
What happened:
A VM started from a larger PVC (source: pvc) that was cloned from a smaller "original" PVC (source: http) doesn't see the increased space on the root filesystem.
What you expected to happen:
VM has X+Y space on the root drive when started from a PVC of size X+Y.
How to reproduce it (as minimally and precisely as possible):
Download an OS image onto a PVC of size X (source: http). Clone the DataVolume using source: pvc to a new PVC of size X + Y. Start a VM from the new PVC and run lsblk.
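A minimal sketch of the two DataVolumes involved, for reference (names, the image URL, the sizes, and the API version are placeholders/assumptions, not taken from this issue):

```sh
# 1. Import an OS image into a "golden" DataVolume of size X (5Gi here).
kubectl apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: golden-os
spec:
  source:
    http:
      url: "http://example.com/os-image.qcow2"   # placeholder URL
  pvc:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 5Gi
EOF

# 2. Clone it into a DataVolume of size X + Y (10Gi here).
kubectl apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-os
spec:
  source:
    pvc:
      name: golden-os
      namespace: default
  pvc:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi
EOF

# 3. Start a VM with cloned-os as its root disk, then run lsblk / fdisk -l
#    inside the guest and compare against 10Gi.
```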
Anything else we need to know?:
This is useful for dynamic VM flavours. The use case: an administrator creates a golden PVC of minimal size, and users clone that PVC with a clone size >= the golden size, giving their VMs' root filesystems as much or as little space as they require.
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml): v1.9.5
- Kubernetes version (use kubectl version): 1.14.0
- Cloud provider or hardware configuration: Bare metal
- Install tools: kubeadm
- Others:
We currently do not support resizing the guest file system.
Doesn't the guest file system get resized in the initial datavolume creation? I see this in my logs:
I0820 22:53:13.088553 1 data-processor.go:165] New phase: Resize
W0820 22:53:19.008884 1 data-processor.go:222] Available space less than requested size, resizing image to available space 15660400640.
I0820 22:53:19.009145 1 data-processor.go:228] Expanding image size to: 15660400640
Could this logic be reused when cloning PVCs?
That is actually the raw disk being resized.
So if your original image has a 5G partition on a 5G raw disk, and you create a new DV with size 10G, the raw disk is resized to 10G, but the partition stays at 5G. The user has to resize the appropriate partition afterwards.
There are a lot of complications with attempting to resize the guest partitions: if there are multiple partitions, which one do we resize, for instance? There are tools that can resize guest partitions for a limited set of guest OSes, but we haven't gotten around to properly defining the behavior and API for doing that.
Closed, that explanation makes sense. Is there any way to provide a more traditional VM experience where I can specify the size of my desired root filesystem while taking advantage of CDI/snapshotting/copy-on-write (and not having to redownload the image to a bigger original datavolume)?
Currently, unfortunately not. One thing you could try is an init script that resizes the root partition the first time the VM starts. We would like to enhance the experience in the future to do that for some known OSes, and there are tools (virt-resize, for instance) that allow us to do so, but we would have to design and implement the API for it.
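A rough first-boot sketch of that init-script idea, run e.g. from a cloud-init runcmd or a oneshot unit (the device name, partition number, and filesystem are assumptions that depend on the guest image):

```sh
# Grow the root partition and its filesystem on first boot.
# Assumptions: virtio disk /dev/vda, root partition 2, ext4 filesystem.
growpart /dev/vda 2     # from cloud-utils: extend the partition to the end of the disk
resize2fs /dev/vda2     # grow the ext4 filesystem (use xfs_growfs / for XFS)
```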
Slightly tangential to the original issue, but even after rebooting and rescanning the SCSI device, the VM insists that the disk is ~15Gi in size instead of the PVC size of 20Gi. Creating a custom init script to resize the root partition is a totally acceptable solution for me, but I can't seem to get the VM to recognize the increased space in the PVC even if I try it manually.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
clear Bound pvc-5852c88b-aef0-4709-804a-6f479d6cd1ce 20Gi RWO csi-rbd 16m
clear-template Bound pvc-d0c06eda-9e63-4fd8-9d89-e33acfbf164b 15Gi RWO csi-rbd 43h
$ ssh root@clearvm -p 34333
root@clearvm~ # fdisk -l
Disk /dev/vda: 14.6 GiB, 15660417024 bytes, 30586752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disklabel type: gpt
Disk identifier: CDDE74C6-467A-4E08-A569-15F69642E948
Device Start End Sectors Size Type
/dev/vda1 2048 131071 129024 63M EFI System
/dev/vda2 131072 30586718 30455647 14.5G Linux root (x86-64)
Disk /dev/sda: 374 KiB, 382976 bytes, 748 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
root@clearvm~ # echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
root@clearvm~ # fdisk -l
Disk /dev/vda: 14.6 GiB, 15660417024 bytes, 30586752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disklabel type: gpt
Disk identifier: CDDE74C6-467A-4E08-A569-15F69642E948
Device Start End Sectors Size Type
/dev/vda1 2048 131071 129024 63M EFI System
/dev/vda2 131072 30586718 30455647 14.5G Linux root (x86-64)
Disk /dev/sda: 374 KiB, 382976 bytes, 748 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
So we don't actually resize the virtual disk itself after cloning. I guess doing an RFE to get that added (should be fairly simple) makes sense.
The reason is that we can clone any kind of PVC, not just disk images, and if we clone a non-disk image, resizing makes no sense. I had to look up the reasoning why we didn't resize after clone.
Re-opening to resize after cloning a virtual disk.
Note this already works if you are using block volume mode, because we simply copy the bits.
Resizing should probably not be done implicitly during cloning because you may not be cloning a VM disk. Also we need to consider the smart clone case where we would not be spawning any clone pods. The proper solution for this is probably to have a Job that could be scheduled to run against the cloned PVC once it has been prepared.
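Purely as an illustration of that idea (not an existing CDI API): such a Job might mount the cloned PVC and grow the raw disk.img to the new size. The container image, PVC name, and target size below are placeholders; a real implementation would compute the target from the available space minus filesystem overhead, the way the import path does.

```sh
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: expand-cloned-disk
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: expand
        image: quay.io/example/qemu-img-tools:latest   # placeholder image containing qemu-img
        command: ["qemu-img", "resize", "-f", "raw", "/data/disk.img", "20G"]  # placeholder size
        volumeMounts:
        - name: cloned
          mountPath: /data
      volumes:
      - name: cloned
        persistentVolumeClaim:
          claimName: cloned-os   # the PVC produced by the clone
EOF
```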
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
@awels Any word on this issue? Even when cloning a block DataVolume the VM still cannot see the additional space.
So with a block device it should see everything (well, at least fdisk should see the full size); we don't resize the guest partitions. We haven't gotten guest partition resize to the top of the priority list yet, but it's in there.
@awels in my workflow I have template DVs/PVCs (RHEL, Windows, etc.) which I use as a source for new VMs via DataVolumeTemplate, and that works well. I can even specify a larger storage size: for example, the original DV/PVC is 10GB and I would like to expand the disk to 40GB, which I'm able to do, but the disk.img retains the original size and needs to be expanded via qemu-img (see the sketch below).
Would it be possible to detect during the cloning process that it's a qcow image and that the destination disk size is larger than the size of the disk? Inside the OS, the partition size can be adjusted via cloud-init/sysprep, etc.
It would be great to have this feature for sure; otherwise one would have to create multiple sizes of source disk images or find another mechanism to automate PVCs.
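For reference, the manual workaround mentioned above might look roughly like this (the pod name, mount path, and 40G target are placeholders; -f raw assumes CDI's raw disk.img layout):

```sh
# Grow the disk image on the cloned filesystem-mode PVC by hand.
kubectl exec -it some-pod-mounting-the-pvc -- \
  qemu-img resize -f raw /data/disk.img 40G
# The partition/filesystem inside the guest still has to be grown afterwards,
# e.g. growpart + resize2fs via cloud-init as sketched earlier in the thread.
```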
Unfortunately cloning is a bit of a problem, as we have several ways of cloning depending on your underlying storage. If the storage supports snapshots, for instance, we use snapshot cloning to create the clone, as that is much faster than a host-assisted clone, which copies data from pod to pod.
We have some thoughts about how to work this with clone, but no time yet to implement it.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
/reopen /remove-lifecycle rotten /lifecycle frozen
@awels: Reopened this issue.
https://github.com/kubevirt/kubevirt/pull/5981 includes this request, as well as support for doing it with a running VM.
Mentioned already, but: https://libguestfs.org/virt-resize.1.html
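A rough offline example with virt-resize, in case it helps (the file paths, the +5G delta, and the partition name are placeholders; virt-resize needs a pre-created, larger output disk):

```sh
truncate -r olddisk.img newdisk.img   # create an output disk the same size as the input
truncate -s +5G newdisk.img           # then grow it by the desired amount
virt-resize --expand /dev/sda2 olddisk.img newdisk.img   # copy and expand the root partition
```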
@iExalt There has been some new functionality added since this issue has been open. Could you try out this flow and let us know if everything works as expected for you?
I haven't had a chance to play around with KubeVirt in quite some time now unfortunately :( but I'm excited to test drive the functionality when I get back into the flow!
kubevirt/kubevirt#5981 includes this request, as well as support for doing it with a running VM.
Thanks! By enabling the ExpandDisks feature gate, newly spawned VMs are resized to the PVC size by default, and the partitions of already-running VMs can be resized by hand to the size of the PVC :)
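For anyone else landing here, enabling the gate looks roughly like this (assuming the KubeVirt CR is named kubevirt in the kubevirt namespace; a merge patch replaces the whole featureGates list, so include any gates you already use):

```sh
kubectl patch kubevirt kubevirt -n kubevirt --type merge -p \
  '{"spec":{"configuration":{"developerConfiguration":{"featureGates":["ExpandDisks"]}}}}'
```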
@MaisenbacherD @iExalt I think we can safely close this issue since its main topic has been addressed. Feel free to reopen if you consider it necessary. Thanks!