containerized-data-importer
Virtual image size is larger than the reported available storage
What happened: I tried to upload an .iso image via CDI to a PVC. The ISO image is nearly 5GB. The upload always fails with an error saying the virtual image is larger than the reported available storage. I have tried multiple PVC sizes (12Gi, 64Gi, 120Gi) but none of them work.
What you expected to happen: The ISO image should be successfully uploaded to the PVC.
How to reproduce it (as minimally and precisely as possible): Run the following command:
kubectl virt image-upload pvc win-2022-std-iso --size=120Gi --image-path=win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.49.172.185:31876 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/win-2022-std-iso not found
PersistentVolumeClaim default/win-2022-std-iso created
Waiting for PVC win-2022-std-iso upload pod to be ready...
Pod now ready
Uploading data to https://10.49.172.185:31876
4.67 GiB / 4.67 GiB [==========================================================================================================================================================] 100.00% 2m13s
unexpected return value 400, Saving stream failed: Virtual image size 12886302720 is larger than the reported available storage 12884901888. A larger PVC is required.
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml): 1.58.3
- Kubernetes version (use kubectl version): 1.29.0
- DV specification: N/A
- Cloud provider or hardware configuration: kind
- OS (e.g. from /etc/os-release): Rocky Linux 9.3
- Kernel (e.g. uname -a): 5.14.0-362.24.1.el9_3.x86_64
- Install tools: kubectl virt
- Others: N/A
Definitely weird - the error indicates that only 12Gi is available in the volume (even though you asked for 120).
Is it possible that the storage provisioner (--storage-class=datavg-thin-pool) is providing a volume smaller than the request? (Meaning, there is no 120Gi available in the pool, but the volume creation still goes through.)
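For reference, the two byte counts in the error line up exactly with a 12Gi volume; a quick arithmetic check in any shell:
echo $((12 * 1024 * 1024 * 1024))    # 12884901888 - exactly the "reported available storage" from the error
echo $((12886302720 - 12884901888))  # 1400832 - the virtual image is only ~1.3 MiB over 12Gi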
You could test this by creating a 120Gi PVC and a pod mounting it, then run something like:
bash-5.1# stat /pvcmountpath/ -fc %a
907
bash-5.1# stat /pvcmountpath/ -fc %f
974
(%a - available blocks (what we care about), %f - total free blocks)
To get the total size in bytes you would multiply by the block size (stat /pvcmountpath/ -fc %s).
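A minimal sketch of that experiment, assuming the datavg-thin-pool storage class from the report; the PVC name, pod name, image, and mount path are placeholders:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: size-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: datavg-thin-pool
  resources:
    requests:
      storage: 120Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: size-test-pod
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "86400"]
    volumeMounts:
    - name: data
      mountPath: /pvcmountpath
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: size-test-pvc
EOF
# available blocks times block size gives the usable bytes
kubectl exec size-test-pod -- stat -fc %a /pvcmountpath
kubectl exec size-test-pod -- stat -fc %s /pvcmountpath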
Hi @akalenyu
Thanks for taking the time to look into my issue.
I have tested by creating a PVC of 128Gi:
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       VOLUMEATTRIBUTESCLASS   AGE
slmitswinp1-pvc   Bound    pvc-36655653-b079-4872-b20f-e64bf8a5ae50   128Gi      RWX            datavg-thin-pool
It looks like the size of my PVC is only 4096 bytes? The strange thing is that when I list the contents of the disk, there is already a disk.img file:
[root@k8s-master-01 ~]# kubectl exec pods/nginx -- ls -larth /var/www/html
total 28K
drwx------. 2 root root  16K Apr  1 11:19 lost+found
drwxr-xr-x. 3 root root 4.0K Apr  1 11:20 .
-rw-r--r--. 1  107  107 119G Apr  1 11:24 disk.img
drwxr-xr-x. 3 root root   18 Apr  1 11:32 ..
Actually, the size of your PVC is around 120Gi: 31068944 * 4096 (available blocks * block size).
It is definitely weird that a disk.img already exists there, unless you tried the upload before creating the nginx pod.
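Spelling out that multiplication (numbers taken from the stat output discussed above):
echo $((31068944 * 4096))                         # 127258394624 bytes
echo $((31068944 * 4096 / 1024 / 1024 / 1024))    # 118, i.e. ~118 GiB - roughly the requested 120Gi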
Maybe this case is similar to the one described in https://issues.redhat.com/browse/CNV-36769? If a first upload attempt failed for some unrelated reason (maybe the upload pod was force-deleted), then on subsequent retries the original disk.img will still be there, occupying space and preventing the upload from succeeding, as @akalenyu suggested.
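If that is what happened, one way to check for (and clear) a stale image before retrying - sketched here against the filesystem-mode PVC mounted at /var/www/html in the nginx pod above:
kubectl exec pods/nginx -- ls -lh /var/www/html/disk.img
# if it is a leftover of a failed attempt, remove it and retry the upload
kubectl exec pods/nginx -- rm /var/www/html/disk.img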
I made some progress with this issue by using --volume-mode=FileSystem. This option works. If the PVC is a block volume, it doesn't work. I don't really have an explanation for this. Even if I delete the block PVC completely and create a new one, it always shows the "larger PVC required" message.
Ah okay, if this is block, you could repeat the nginx experiment but instead use blockdev --getsize64 /dev/pvcdevicepath.
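A sketch of that variant, assuming a Block-mode PVC; the pod name, image, and device path are placeholders, and the claim name should be whichever block PVC you are testing:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: block-size-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "86400"]
    volumeDevices:
    - name: data
      devicePath: /dev/pvcdevicepath
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: win-2022-std-iso
EOF
# raw device size in bytes
kubectl exec block-size-test -- blockdev --getsize64 /dev/pvcdevicepath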
So I am wondering which version of the virtctl plugin you have. I see you are creating a PVC with
kubectl virt image-upload pvc win-2022-std-iso --size=120Gi --image-path=win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.49.172.185:31876/ --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
instead of a DV, which should generate a message about not using DataVolumes IIRC, and I don't see that.
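For reference, the DataVolume flavor of the same upload would look roughly like this (a sketch, with the flags copied from the report above):
kubectl virt image-upload dv win-2022-std-iso --size=120Gi --image-path=win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.49.172.185:31876 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind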
Hi @awels Below is the version info. I installed virt using krew.
[root@k8s-master-01 ~]# kubectl virt version
Client Version: version.Info{GitVersion:"v1.2.0", GitCommit:"f26e45d99ac35743fc33d6a121b629e9a9af6b63", GitTreeState:"clean", BuildDate:"2024-03-05T20:34:24Z", GoVersion:"go1.21.5 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v1.2.0", GitCommit:"f26e45d99ac35743fc33d6a121b629e9a9af6b63", GitTreeState:"clean", BuildDate:"2024-03-05T21:32:21Z", GoVersion:"go1.21.5 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}
I tried another upload with virtio-win.iso, which is about 600MB. Both options, block or FileSystem PVC, work.
kubectl virt image-upload pvc virtio-win-iso-test --size=4Gi --image-path=/root/virtio-win.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.248.83.131:443 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/virtio-win-iso-test not found
PersistentVolumeClaim default/virtio-win-iso-test created
Waiting for PVC virtio-win-iso-test upload pod to be ready...
Pod now ready
Uploading data to https://10.248.83.131:443
598.45 MiB / 598.45 MiB [========================================================================================================================================================] 100.00% 16s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /root/virtio-win.iso completed successfully
Interesting. Can you try Alex's suggestion of using blockdev to see the size of the device properly, in both the smaller case and the larger case? Maybe there is some overhead we are not aware of.
Interestingly enough, now it works with the Windows ISO as well, which didn't work before. I cannot reproduce the issue now, unfortunately. The storage backend I am using is Linstor.
kubectl virt image-upload pvc nginx-test-iso --size=64Gi --image-path=/root/win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.248.83.131:443 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/nginx-test-iso not found
PersistentVolumeClaim default/nginx-test-iso created
Waiting for PVC nginx-test-iso upload pod to be ready...
Pod now ready
Uploading data to https://10.248.83.131:443
4.67 GiB / 4.67 GiB [==========================================================================================================================================================] 100.00% 2m11s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading /root/win22-std.iso completed successfully
Hmm, maybe sometimes the Linstor CSI driver messes up the size calculation? IIRC you were doing --size=120Gi before, now you're using 64.
I've been trying multiple sizes before: 40Gi, 64Gi, 120Gi. None of them worked. I will try out Portworx at a later time to see if it is more stable. Thank you for all of your help with this issue.
@kvaps Do you have any insight as to what might be happening with Linstor here?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
I've been trying multiple sizes before: 40Gi, 64Gi, 120Gi. None of them worked. I will try out Portworx at a later time to see if it is more stable. Thank you for all of your help with this issue.
Hey, did you get a chance to try this with a different provisioner?
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.