
KET storage volumes with non-root mounts.

Open TimWoolford opened this issue 7 years ago • 12 comments

We've set up 3 storage nodes on our cluster(s) using KET.

Our VMs come with a 50 GB partition for the OS mounted on /, and we've mounted an additional 1 TB disk at /data, which is where KET stores the bricks.

This works fine for the default 10 GB of storage, but when we start allocating more, say 100 GB for a Prometheus PVC, KET checks the / volume for space. Since / has less than the 100 GB we're trying to allocate, KET fails the process, even though /data has 950 GB free.

There doesn't seem to be a way to change volume_mount (hard-coded in ClusterCatalog) or volume_base_dir (set in all.yaml).

Should KET be able to create the XFS filesystems as prescribed in the same way as the docker direct-lvm is created?

Our workaround for the moment is to use KET to create the volumes at a size that passes the validation (I can't find a way to skip the validation), and then increase the size of the gluster volume afterwards.

TimWoolford avatar Jun 21 '17 13:06 TimWoolford

Sounds like we are looking at the wrong location for available space, and we should be looking at /data instead.

alexbrand avatar Jun 22 '17 12:06 alexbrand

@TimWoolford Can you elaborate on the creation of the XFS filesystem?

alexbrand avatar Jun 22 '17 13:06 alexbrand

In our case /data is the mount, and it will appear in the ansible_mounts list for available-space validation. In a "vanilla" single-device setup, the mount would be /.

Perhaps KET could work backwards up the path until a mount point is found, or maybe it would be simpler to get the available space with something like `df -k {{ volume_mount }}{{ volume_base_dir }} | awk 'NR==2 {print $4}'`
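As a sketch of that df-based check (the path here is illustrative; `-P` is added so df keeps each filesystem on a single line), note that df already resolves any existing path to its containing mount point, so the only "walking up" needed is for path components that don't exist yet:

```shell
# Report the space available to a brick path, in KiB (sketch only).
avail_kb() {
  p=$1
  # Walk up the directory tree until we hit a path that exists,
  # e.g. before the brick directory has been created.
  while [ ! -e "$p" ]; do p=$(dirname "$p"); done
  # NR==2 skips the header row; $4 is the "Available" column in 1K blocks.
  df -Pk "$p" | awk 'NR==2 {print $4}'
}

avail_kb /data/bricks   # illustrative path; prints free KiB on the backing mount
```

With this, a check against `{{ volume_mount }}{{ volume_base_dir }}` would report the space on whatever filesystem actually backs the brick directory, rather than always on /.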

For the XFS part, I was thinking the creation process would be much like how KET creates the docker direct-lvm.

KET is given the block device for storage (e.g. /dev/sdb):

  • create the physical volume
  • create the volume group
  • create the logical volume
  • convert it to a thin pool

then format as XFS and mount the partition.

KET would then own the exact mount point if a separate block device is provided.
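A rough sketch of those steps (the device name, volume-group name, and sizes are illustrative assumptions, not KET's actual values; with DRY_RUN=1, the default here, commands are printed rather than executed):

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

provision_brick_fs() {
  dev=$1; mnt=$2
  run pvcreate "$dev"                                   # physical volume
  run vgcreate gluster "$dev"                           # volume group
  run lvcreate -n pool -l 90%VG gluster                 # data LV for the pool
  run lvcreate -n poolmeta -l 1%VG gluster              # metadata LV
  run lvconvert -y --thinpool gluster/pool --poolmetadata gluster/poolmeta
  run lvcreate -V 900G --thin -n brick gluster/pool     # thin volume for bricks
  run mkfs.xfs -i size=512 /dev/gluster/brick           # format as XFS
  run mkdir -p "$mnt"
  run mount /dev/gluster/brick "$mnt"                   # KET would own this mount
}

provision_brick_fs /dev/sdb /data
```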

TimWoolford avatar Jun 22 '17 13:06 TimWoolford

Hi, this is a guess on my part, but does the fact that the variable appears to be hard-coded with the string "/" have something to do with it? I had the same problem and had to modify the add_volume role to make this work.

In /pkg/install/execute.go, line 433:

```go
cc.VolumeQuotaBytes = volume.SizeGB * (1 << (10 * 3))
cc.VolumeMount = "/"

// Allow nodes and pods to access volumes
```

tsikorski avatar Jun 26 '17 21:06 tsikorski

Hi, I noticed we can define the mount point in the group_vars/all.yaml file by modifying the volume_mount variable to the required mount point. Shahid

shahidrasool avatar Jul 24 '17 16:07 shahidrasool

Has there been any progress on a patch for this? It seems as though a "production" setup would avoid storing persistent volumes on /.

I'm trying to adopt this as my k8s setup tool, but without this it is a no-go.

jlmeeker avatar Nov 30 '17 22:11 jlmeeker

@jlmeeker The data blocks are stored to {volume_mount}/{volume_base_dir}, which by default is /data. The available space check is the issue being described here, as it looks at volume_mount. See the add-volume task in kismatic/ansible/roles/volume-add/tasks/main.yaml for reference.

In the scenario @TimWoolford is describing, we can configure volume_mount in group_vars/all.yaml to be the actual mount path, e.g.

```yaml
volume_mount: /data
volume_base_dir: /
```

This would create blocks in /data and also check the size appropriately, though this is a workaround.

The recommendation would be to use a subdirectory for the base, e.g.

```yaml
volume_mount: /k8s-mount-1
volume_base_dir: data/
```

Blocks would then be stored to /k8s-mount-1/data, and the available-space check would run against /k8s-mount-1.
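One way to sanity-check that layout (a sketch; `mount_of` and `check_layout` are hypothetical helpers, not part of KET) is to confirm that the path the free-space check uses and the path the bricks are written to report the same backing filesystem:

```shell
# Print the mount point backing a path (-P keeps df output on one line).
mount_of() { df -Pk "$1" | awk 'NR==2 {print $6}'; }

check_layout() {
  volume_mount=$1; volume_base_dir=$2
  brick_dir="${volume_mount}/${volume_base_dir}"
  mkdir -p "$brick_dir"
  mount_of "$volume_mount"   # path the free-space check runs against
  mount_of "$brick_dir"      # path the bricks are written to
}

# e.g. check_layout /k8s-mount-1 data/   (run where that mount exists)
```

If the two lines printed differ, the size validation is being done against a different filesystem than the one that will actually hold the data.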

Hope this helps.

jaycoon avatar Nov 30 '17 22:11 jaycoon

@TimWoolford See comment above. You can set your volume_mount path appropriately in all.yaml and this will check the size correctly.

Setting up the volume to mount at /data is not recommended, but even so, you can work around this on an existing setup by specifying / as the base dir.

jaycoon avatar Nov 30 '17 22:11 jaycoon

Seems like I set that and it didn't behave any differently.... Is there something I need to run between editing that file and the volume add command?

jlmeeker avatar Dec 01 '17 00:12 jlmeeker

@jlmeeker Are you a member of the Kismatic Slack group? We can help troubleshoot better there. http://slack.kismatic.com/

jaycoon avatar Dec 01 '17 11:12 jaycoon

Wanted to bump this as I've also hit the same problem. The fix mentioned here (https://github.com/apprenda/kismatic/issues/598#issuecomment-348345390) doesn't appear to work.

cadm-deprez avatar Jan 15 '18 10:01 cadm-deprez

Issue still exists. Any plans to resolve it? Storing data on the / volume is not how it's meant to work. Changing ansible/playbooks/group_vars/all.yaml to

```yaml
# Gluster
volume_mount: /data
volume_base_dir: /
```

doesn't help, as volume_mount is hard-coded to "/" in /pkg/install/execute.go, as stated above.

The only workaround I found is changing line 16 in ansible/playbooks/roles/volume-add/tasks/main.yaml to:

```yaml
when: "{{ item.mount == '/data' and item.size_available > volume_quota_bytes|float }}"
```

(this assumes the gluster LVM volume is mounted under /data).

savealive avatar Jun 25 '18 22:06 savealive