gdeploy
[RFE] Provision gluster bricks using blivet (and/or) libstoragemgmt
gdeploy should be able to provision gluster bricks using tools like Blivet or libstoragemgmt. While provisioning, the best practices from the gluster admin guide have to be followed.
Summary of steps involved in creating a gluster brick from a disk
LVM layer:
- Physical Volume creation:
  $ pvcreate --dataalignment <alignment_value> <disk>
  where alignment_value:
  - For JBODs: 256k
  - For H/W RAID: RAID stripe unit size * number of data disks (the number of data disks depends on the RAID type)
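As a sketch under assumed values (128 KiB stripe unit, RAID 6 with 10 data disks; the disk path /dev/sdb is hypothetical), the alignment value could be computed like this:

```shell
# Hypothetical H/W RAID 6 layout: 128 KiB stripe unit, 10 data disks
STRIPE_UNIT_KB=128
DATA_DISKS=10
ALIGNMENT_KB=$((STRIPE_UNIT_KB * DATA_DISKS))
echo "${ALIGNMENT_KB}k"   # value to pass to --dataalignment
# pvcreate --dataalignment ${ALIGNMENT_KB}k /dev/sdb   # requires root and a real disk
```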
- Volume Group creation:
  - For RAID disks:
    $ vgcreate --physicalextentsize <extent_size> VOLGROUP <physical_volume>
    where extent_size = RAID stripe unit size * number of data disks (the number of data disks depends on the RAID type)
  - For JBODs:
    $ vgcreate VOLGROUP <physical_volume>
- Thin Pool creation:
  $ lvcreate --thinpool VOLGROUP/thin_pool --size <pool_size> --chunksize <chunk_size> --poolmetadatasize <meta_size> --zero n
  where:
  - meta_size: 16 GiB recommended; if that is a concern, at least 0.5% of pool_size
  - chunk_size:
    i. For JBOD: use a thin pool chunk size of 256 KiB.
    ii. For RAID 6: stripe unit size * number of data disks, which must be between 1 MiB and 2 MiB (preferably close to 1 MiB).
    iii. For RAID 10: use a thin pool chunk size of 256 KiB.
  NOTE: if multiple bricks are needed on a single H/W device, then create multiple thin pools from a single VG.
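For RAID 6 the chunk-size constraint can be checked with a small sketch; the stripe unit (128 KiB), data-disk count (10), and the VG/pool names below are all assumptions, not part of the original report:

```shell
# Hypothetical RAID 6: 128 KiB stripe unit, 10 data disks
CHUNK_KB=$((128 * 10))                 # 1280 KiB = 1.25 MiB
if [ "$CHUNK_KB" -ge 1024 ] && [ "$CHUNK_KB" -le 2048 ]; then
    echo "chunk size ${CHUNK_KB}k is within the 1-2 MiB window"
fi
# lvcreate --thinpool gluster_vg/gluster_pool --size 2T \
#          --chunksize ${CHUNK_KB}k --poolmetadatasize 16G --zero n   # requires root
```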
- Thin LV creation:
  $ lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool
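Putting the LVM steps above together, a minimal end-to-end sketch might look as follows; every name (gluster_vg, gluster_pool, brick1, /dev/sdb) and size is hypothetical, and the commands require root, so they are shown commented out:

```shell
VG=gluster_vg; POOL=gluster_pool; LV=brick1   # hypothetical names
# pvcreate --dataalignment 1280k /dev/sdb
# vgcreate --physicalextentsize 1280K ${VG} /dev/sdb
# lvcreate --thinpool ${VG}/${POOL} --size 2T \
#          --chunksize 1280k --poolmetadatasize 16G --zero n
# lvcreate --thin --name ${LV} --virtualsize 2T ${VG}/${POOL}
BRICK_DEV="/dev/${VG}/${LV}"   # LVM exposes the thin LV at /dev/<vg>/<lv>
echo "${BRICK_DEV}"
```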
XFS Layer:
- Formatting the filesystem on the disk:
  - XFS inode size: 512 bytes
  - XFS RAID alignment:
    - For RAID 6: su = RAID stripe unit size, sw = number of data disks. Example:
      $ mkfs.xfs other_options -d su=128k,sw=10 device_name
    - For RAID 10 and JBODs: this can be omitted; the default is fine.
- Logical block size for the directory:
  - For all types: the default is 4k; for better performance, use a greater value like 8192. Use "-n size=<value>" to set this. Example:
    $ mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 <logical volume>
    meta-data=/dev/mapper/gluster-brick1 isize=512 agcount=32, agsize=37748736 blks
- Mounting the filesystem:
  - Allocation strategy: the default is inode32, but inode64 is recommended; set it by using "-o inode64" during mount.
  - Access time: if the application does not require updating the access time on files, then the filesystem must always be mounted with the noatime mount option. Example:
    $ mount -t xfs -o inode64,noatime <logical volume> <mount point>
  - Allocation groups: the default is fine.
  - Percentage of space allocated to inodes: if the workload consists of very small files (average file size less than 10 KB), then it is recommended to set the maxpct value to 10 while formatting the filesystem.
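To persist the recommended mount options across reboots, an /etc/fstab entry could be added; the device and mount point in this sketch are hypothetical:

```shell
BRICK_DEV=/dev/gluster_vg/brick1   # hypothetical thin LV
MOUNT_POINT=/bricks/brick1         # hypothetical mount point
# mount -t xfs -o inode64,noatime ${BRICK_DEV} ${MOUNT_POINT}   # requires root
FSTAB_LINE="${BRICK_DEV} ${MOUNT_POINT} xfs inode64,noatime 0 0"
echo "${FSTAB_LINE}"               # line to append to /etc/fstab
```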
It would be great if gdeploy could take a free disk name as input from the user and provision it as a brick usable by a gluster volume, automatically figuring out the details of the disk and provisioning it per the best practices above, using the tools mentioned.
@nnDarshan will blivet/libstoragemgmt be available in the RHEL7 and RHEL6 default installs? Because without that it is going to be difficult for gdeploy to be used for brick management.
We can do this only after these technologies are available in the base RHEL release.
How is this change related to https://github.com/Tendrl/documentation/issues/49?