moosefs-csi
sourceMountPath or subPath(Expr) for volumes
Hi,
I have an issue integrating moosefs-csi in a StatefulSet. I need a way to choose the source path to mount. I already know I can set a subPath under volumeMounts, such as:
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: moosefs-volume
          subPath: subdir # <--------- here
      command: [ "sleep", "1000000" ]
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc
but I need to do this under volumes:
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: moosefs-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc
        subPath: subdir # <--------- here
Even if it looks the same, it is fundamentally different. I need to be able to choose which directory in the source I want to mount, with the ability to append the POD_NAME in the case of a StatefulSet. My use case is OpenLDAP, which persists its data in /var/lib/openldap. I cannot mount /var/lib/openldap/$(POD_NAME), or I would have to modify the OpenLDAP configuration to refer to a dynamic path.
Example of what I want for my use case:
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: moosefs-volume
      command: [ "sleep", "1000000" ]
      env:
        - name: POD_NAME # <--------- here
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc
        subPathExpr: $(POD_NAME) # <--------- here
I'm not sure whether this is a MooseFS plugin issue or a Kubernetes issue. I created an issue on the Kubernetes side: https://github.com/kubernetes/kubernetes/issues/89091, but I still believe this is a MooseFS plugin issue. The sourcePath in the code is set to the endpoint of mfsmaster, but it could be made configurable.
The way to accomplish this is somewhat of a hack. First, manually create the directory in your MooseFS cluster (this plugin doesn't create directories yet). Then, define a volume pointing at that directory and reference that volume in the PVC definition:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: moosefs-pv
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: moosefs-block-storage
  volumeMode: Filesystem
  capacity:
    storage: 5Gi
  csi:
    driver: com.tuxera.csi.moosefs
    fsType: ext4
    volumeHandle: moosefs-pv
    volumeAttributes:
      endpoint: '<master endpoint>:/<directory path>'
...
This will mount that directory from within the MooseFS cluster as the root of the volume in Kubernetes.
Now you can point the claim at the volume you created:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moosefs-csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: moosefs-block-storage
  volumeMode: Filesystem
  volumeName: moosefs-pv
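For completeness, here is a minimal sketch of a Pod consuming that claim (reusing the names from the examples above); the MooseFS directory named in the PV's endpoint becomes the root of the mounted volume:
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: [ "sleep", "1000000" ]
      volumeMounts:
        - mountPath: "/data" # the directory from the PV's endpoint appears here
          name: moosefs-volume
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: moosefs-csi-pvc # the claim bound to moosefs-pv above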
This should be enough to get you started. I'm working on a patch to add the directory path as an entry in volumeAttributes, similar to how the CSI plugin for NFS storage allows setting the share to mount.
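As a rough sketch of what that might look like (purely hypothetical, nothing here is implemented yet, and the attribute name rootPath is made up for illustration), the PV would keep the master address in endpoint and carry the directory as a separate attribute:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: moosefs-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  storageClassName: moosefs-block-storage
  csi:
    driver: com.tuxera.csi.moosefs
    volumeHandle: moosefs-pv
    volumeAttributes:
      endpoint: '<master endpoint>'
      rootPath: '/<directory path>' # hypothetical attribute, analogous to the share setting in the NFS CSI plugin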
Hi, is this going to be added soon? Also, I can see "ext4" in the fsType field: should we put the filesystem that holds the MooseFS chunks there, or do we always need to write "ext4"? Thank you.
@antoinetran Sorry to have disappeared. Didn't expect the world to catch on fire all at once. :sob:
I spent a lot of time trying to refactor and update, but the AWS driver gave me problems the entire way.
I ended up creating an on-prem-focused driver, so I can iterate on it more cleanly and don't need to spend time studying up on the goals of the AWS target here: github.com/Kunde21/moosefs-csi