
Clarify documentation on setting up lvm for gluster snapshots

Open jaloren opened this issue 7 years ago • 14 comments

According to this link:

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

the page contains the following sentence:

"It is recommended that only one VG must be created from one storage device."

Not only is this grammatically incorrect, it's extremely confusing. I think what it's trying to say is that a single device should not be in more than one volume group.

The Red Hat doc Formatting_and_Mounting_Bricks contains the following note:

To avoid performance problems resulting from the sharing of the same thin pool, Red Hat Storage recommends that the LV for each Red Hat Storage brick have a dedicated thin pool of its own. As Red Hat Storage volume snapshots are created, snapshot LVs will get created and share the thin pool with the brick LV.

Assuming the above is correct, it seems like the sentence in the open source documentation should be replaced with something like the following section:

For performance reasons, we recommend the following LVM layout (a command sketch follows the list):

  • a single physical device should not be allocated to more than one volume group.
  • you may allocate as many physical devices as you like to a single volume group.
  • you should allocate the entire device to the volume group instead of just a partition of that device.
  • a thin pool is a type of LVM logical volume that consists of a metadata logical volume and a data logical volume. When setting up LVM for gluster snapshots, each gluster brick must be stored on its own thin pool; do not store more than one brick per thin pool.
  • when the metadata LV is created for the thin pool, it should be at least 5% of the data LV's disk size.
  • No more than one brick should be placed on a single logical volume.
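
To make that concrete, here is a minimal command sketch of this layout, assuming a dedicated 100 GB device /dev/sdb and a brick mount point of /data/brick1 (both names hypothetical):

    # One physical volume per device, and one volume group per device:
    pvcreate /dev/sdb
    vgcreate vg_brick1 /dev/sdb

    # One dedicated thin pool per brick; the 5 GB metadata LV is at
    # least 5% of the 90 GB data LV, per the recommendation above:
    lvcreate --size 90G --poolmetadatasize 5G --thinpool pool_brick1 vg_brick1

    # One thin LV per brick, carved out of that pool:
    lvcreate --thin --virtualsize 90G --name lv_brick1 vg_brick1/pool_brick1

    # Format and mount the brick:
    mkfs.xfs -i size=512 /dev/vg_brick1/lv_brick1
    mkdir -p /data/brick1
    mount /dev/vg_brick1/lv_brick1 /data/brick1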

In addition, the open source documentation says this:

"The device name and the alignment value will vary based on the device you are using."

Many people will have no idea what that means. This Red Hat documentation does an excellent job of explaining exactly what it means.
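
For illustration, the alignment in question is what gets passed to pvcreate and mkfs.xfs; here is a sketch assuming a hypothetical RAID 6 array with 10 data disks and a 128 KiB stripe unit:

    # Align the PV data area to the full stripe (128 KiB x 10 data disks):
    pvcreate --dataalignment 1280k /dev/sdb

    # Tell XFS the same geometry via stripe unit (su) and stripe width (sw):
    mkfs.xfs -f -i size=512 -d su=128k,sw=10 /dev/vg_brick1/lv_brick1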

So could we either incorporate that information into the open source docs or, alternatively, link to the Red Hat docs in that note?

jaloren avatar Mar 02 '17 11:03 jaloren

@rajeshjoseph What do you think about this ?

@jaloren Thanks for reporting this. PRs to fix this are always welcome though :)

prashanthpai avatar Mar 02 '17 12:03 prashanthpai

@prashanthpai I am totally willing to submit a PR. However, before I do that, I would like confirmation from others (preferably someone who actually works on gluster and has some background with disk layout) that what I said is accurate.

I'd also need to know if it's kosher to incorporate content from the Red Hat doc inside the open source docs. Not sure if there'd be any type of copyright issue there.

jaloren avatar Mar 02 '17 12:03 jaloren

@jaloren Right. I'd prefer to have @rajeshjoseph take a look at this.

prashanthpai avatar Mar 02 '17 12:03 prashanthpai

I have to admit, I was thinking about putting in a request after reading it the other day, but moved on. Back at the docs today, after googling, I found this thread. Thanks @jaloren

mikeSGman avatar Jun 10 '17 02:06 mikeSGman

@prashanthpai @jaloren Most of the suggestions look good.

I don't think "you may allocate as many physical devices as you like to a single volume group" is right. I think we recommend that each physical device gets its own volume group. Need confirmation from the snapshot team here.

About referencing the Red Hat docs: it is ok to provide links to Red Hat docs. IIRC, images should not be copied.

raghavendra-talur avatar Jul 24 '17 06:07 raghavendra-talur

@raghavendra-talur if that's true about a one-to-one relationship between PV and VG, then how should one expand disk for a brick on a server? For example, let's say I have the following setup:

  • device /dev/sdc
  • /dev/sdc is a physical volume
  • /dev/sdc is 100 GB.
  • /dev/sdc is in the volume group named VolGrp1
  • In VolGrp1, there is a single logical thinpool volume: LV1
  • The brick directory is mounted on LV1

Now given the above, let's say I want to add 100 GB of disk to the brick directory. Here's what I want to do (sketched as commands after the list):

  1. add device /dev/sdd to the VM.
  2. create a physical volume of /dev/sdd
  3. add /dev/sdd to VolGrp1
  4. lvextend the thinpool LV named LV1
  5. xfs_growfs on the brick directory.
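
In command form, that plan would look roughly like this (a sketch; the brick mount point /data/brick1 is hypothetical):

    pvcreate /dev/sdd               # step 2: make /dev/sdd a physical volume
    vgextend VolGrp1 /dev/sdd       # step 3: add it to the existing volume group
    lvextend -L +100G VolGrp1/LV1   # step 4: grow the thin pool by 100 GB
    xfs_growfs /data/brick1         # step 5: grow the filesystem on the brick mount
    # (if the brick actually lives on a thin LV inside the pool, its
    # virtual size would need its own lvextend as well)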

Based on your comment, this would not be recommended. Given that, how would I add disk to the LV thin pool that backs the brick directory?

jaloren avatar Aug 08 '17 10:08 jaloren

The documentation doesn't specify why using LVM is recommended for Gluster volumes.

Is it absolutely necessary? Can't we use an XFS partition directly? For example, on a dedicated server with disks whose size will never change, why use LVM with Gluster?

quentin-lpt avatar Oct 10 '17 16:10 quentin-lpt

The documentation doesn't specify why using LVM is recommended for Gluster volumes.

LVM is required only if you intend to use the snapshot feature (a CLI example follows below).

Is it absolutely necessary? Can't we use an XFS partition directly? For example, on a dedicated server with disks whose size will never change, why use LVM with Gluster?

Certainly you can.
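
For reference, this is what the snapshot feature looks like from the CLI; it only works when every brick of the volume sits on a thinly provisioned LV (the volume name gv0 below is hypothetical):

    # Create and list snapshots of a volume named "gv0":
    gluster snapshot create snap1 gv0
    gluster snapshot list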

prashanthpai avatar Oct 10 '17 16:10 prashanthpai

Is there work to be undertaken on this issue? Is it complete so that it can be closed?

sankarshanmukhopadhyay avatar Aug 07 '18 01:08 sankarshanmukhopadhyay

Besides the issue mentioned above, this part of the documentation is not properly formatted on readthedocs. Opening this page on GitHub reveals a clear structure using numbered lists. However, the numbered lists are not shown correctly on readthedocs, where each list resets after every item, leading to multiple items numbered 1.

LukasK13 avatar Nov 23 '20 15:11 LukasK13

Besides the issue mentioned above, this part of the documentation is not properly formatted on readthedocs. Opening this page on GitHub reveals a clear structure using numbered lists. However, the numbered lists are not shown correctly on readthedocs, where each list resets after every item, leading to multiple items numbered 1.

Please confirm if the issue is seen at the link below:

https://docs.gluster.org/en/latest/Administrator-Guide/Setting-Up-Volumes/

aravindavk avatar Nov 24 '20 04:11 aravindavk

Sorry for not being clear. I'm talking about the section on formatting and mounting bricks.

LukasK13 avatar Nov 24 '20 08:11 LukasK13

Sorry for not being clear. I'm talking about the section on formatting and mounting bricks.

Thanks. Sent PR https://github.com/gluster/glusterdocs/pull/616

aravindavk avatar Nov 24 '20 09:11 aravindavk

Hi, as I just came across the "formatting and mounting bricks" wiki page and was kinda confused by it, I'd like to share some of the problems and questions I faced:

  1. It is unclear if and why LVM is required/recommended. I'm not certain I understood the provided explanation correctly: it says to have only one logical volume in the entire volume group, and to use dedicated physical disks as physical volumes (so not partitioning the disk, and therefore also not using a 2nd partition for another volume group).
  2. Depending on the answer to 1, would it be possible to add a diagram of how to design the LVM?
  3. Is it ok to use an LVM RAID5 (6) instead of choosing a dispersed type in gluster?
  4. If yes, does this also apply if it is combined with either a distributed, replicated, and/or distributed replicated gluster volume? This is something I'm (currently) especially interested in, as gluster itself does not support a distributed dispersed replicated volume.
  5. And, technically only partly related to formatting and mounting bricks: can the volume type be changed later on without data loss? And what's the "scale from zero" approach, given that replicated volumes require a minimum of 3 servers because of split brain? (Or is the technical minimum one server, so that I could deploy on a single server and would only have to grow in odd increments later on?)

agowa avatar Oct 23 '23 11:10 agowa