[Question]: Is it possible to create a striped volume from multiple disk images?
Is your question not already answered in the FAQ?
- [X] I made sure the question is not listed in the FAQ.
Is this a general question and not a technical issue?
- [X] I am sure my question is not about a technical issue.
Question
I recently set up vDSM on my Unraid machine. It's running great, so I don't think I'm having an actual issue; more likely I'm misunderstanding something.
I have 6 drives in my Unraid array, so I created 6 qcow2 images and placed one on each disk. The plan was to create a single striped volume in DSM and let Unraid handle parity.
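For reference, this is roughly how I created the images (a sketch; the paths and sizes are just examples from my setup):

```bash
# One sparse qcow2 image per Unraid disk (example paths and sizes)
for i in 1 2 3 4 5 6; do
  mkdir -p "/mnt/disk${i}/vdsm"
  qemu-img create -f qcow2 "/mnt/disk${i}/vdsm/data${i}.qcow2" 4T
done
```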
Well, when I booted back into vDSM, only 4 of the disks showed up (that could be an error on my part). Looking at Storage Manager, each of the 4 detected images is its own volume, and I have no way to make a single volume from them.
Is this expected or did I miss something?
Would I be better off converting some of my Unraid disks to a ZFS pool and just passing one big disk image to DSM?
Hello,
If you look at the source code related to disks: https://github.com/vdsm/virtual-dsm/blob/master/src/disk.sh
The limit programmed by the author is 4 disks, so what you're seeing matches the code: only 4 will ever be detected.
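In simplified form, the detection amounts to something like this (a sketch of the idea only, not the actual disk.sh contents; `attach_disk` is a hypothetical helper name):

```bash
# Sketch of the idea behind the limit (not the actual disk.sh code):
# only a fixed set of storage locations is ever scanned, so a 5th or
# 6th image is simply never attached to the VM.
for i in 2 3 4; do
  dir="/storage${i}"
  [ -d "$dir" ] && attach_disk "$dir"   # attach_disk: hypothetical helper
done
```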
Regarding using only one volume: Virtual DSM is not the same as the full DiskStation Manager; it was made to be used exactly that way, with a single volume.
In this case it is better to manage the redundancy in Unraid and pass only one image to Virtual DSM.
Got it, that makes sense. I never used Virtual DSM on my real Synology so I wasn't aware of the differences.
For us Unraid users, is there a way to create a multi-part qcow2 image that can be spread across multiple disks, and then virtually re-combined and passed to the vDSM container as a "single" image?
Unraid is drive pooling, not actual RAID, so the largest single file can only be as big as the free space on a single disk. ZFS is an option now, which would work great for vDSM; however, I don't know how well ZFS will play with SMR drives. I think BTRFS RAID is an option too, but I've only ever used it with SSDs, so I don't know how it would behave with a large number of HDDs.
@relink2013
Unfortunately, I don't know Unraid well; I have only tested it once.
But the way to do it is this: create a physical or virtual RAID of your drives with all the free space and generate only one image (.img/.qcow2) for vDSM.
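For example, something along these lines (a sketch based on the project's usual run options; the pool path and disk size here are assumptions for illustration):

```bash
# One big image backed by the pool; only one storage location is passed in.
# /mnt/pool/vdsm is an example path, adjust to your setup.
docker run -d --name dsm \
  --device /dev/kvm \
  --cap-add NET_ADMIN \
  -p 5000:5000 \
  -e DISK_SIZE="30T" \
  -v /mnt/pool/vdsm:/storage \
  vdsm/virtual-dsm
```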
Another option is to use XPEnology, but I think that is more interesting for bare-metal use.
For Docker/VM use, I personally prefer vDSM. It has many advantages.
I decided to go for the fun option: I nuked my entire Unraid array and set up a new storage pool using ZFS.
So now my vDSM has 2 volumes.
- Volume 1: DSM itself, databases, indexes, snapshots, etc. This volume is a 1TB image on a mirrored NVMe zpool.
- Volume 2: Actual data storage. A 30TB qcow2 image on a zpool of 6 mirrored pairs of devices; rough layout sketched below. (This is actually "Volume 8", which is incredibly annoying. I fixed it once already to be "Volume 2", but it reverted after restarting.)
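For anyone curious, the equivalent zpool layout looks roughly like this (I built mine through the Unraid UI; the device names below are placeholders):

```bash
# Mirrored NVMe pool backing Volume 1 (DSM system, databases, snapshots)
zpool create -o ashift=12 fast mirror /dev/nvme0n1 /dev/nvme1n1

# Striped mirrors (6 vdevs of 2 disks each) backing Volume 2's 30TB image
zpool create -o ashift=12 tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh \
  mirror /dev/sdi /dev/sdj \
  mirror /dev/sdk /dev/sdl
```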
Combined with the 10GbE NIC I have in there, I literally don't have enough devices to even see whether I can saturate my network or not... but it sure is fast. lol
I'm glad it worked. 10 GbE networking is very good.
About the volumes, have you tried this guide? https://github.com/vdsm/virtual-dsm/issues/763
I do this here and the volume numbers stick; it works fine as long as no virtual disks are added or removed.
Here I use vDSM directly through a VM and not through Docker.