VMStorage pod can't start when an additional volume is mounted to the storageDataPath
For example, I have the following VMCluster CR:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
metadata:
  name: vmcluster
spec:
  vmstorage:
    storageDataPath: "/storage"
    volumes:
      - hostPath:
          path: /data/victoria/storage
          type: ""
        name: vmstorage-volume
    volumeMounts:
      - mountPath: /storage
        name: vmstorage-volume
...
```
I just take hostPath as an example here; it could be another kind of volume type, like cephfs or nfs. The point is that I want vmstorage to write data to the volume I mounted, not to a PVC or emptyDir defined in VMCluster.Spec.VMStorage.Storage. But today the VM Operator creates an emptyDir by default and mounts it at the path defined in storageDataPath, which in this case causes a conflict, and the vmstorage pod can't start, with the following errors:
```
Events:
  Type     Reason        Age                 From                    Message
  ----     ------        ----                ----                    -------
  Warning  FailedCreate  22s (x16 over 3m5s) statefulset-controller  create Pod vmstorage-vmcluster-0 in StatefulSet vmstorage-vmcluster failed error: Pod "vmstorage-vmcluster-0" is invalid: spec.containers[0].volumeMounts[1].mountPath: Invalid value: "/storage": must be unique
```
The pod ends up with two volumeMounts on the same path:
```yaml
containers:
  - name: vmstorage
    ......
    volumeMounts:
      - mountPath: /storage
        name: vmstorage-db
      - mountPath: /storage
        name: vmstorage-volume
volumes:
  - hostPath:
      path: /data/victoria/storage
      type: ""
    name: vmstorage-volume
  - emptyDir: {}
    name: vmstorage-db
```
My proposal is to add a check before appending the default VolumeMount for StorageDataPath: if an additional VolumeMount's mountPath already equals the StorageDataPath, skip the default VolumeMount. I can contribute the code if the proposal is accepted.
Hello!
Thanks for the detailed report, and yes, the behavior should be improved.
But I think it's a little hacky to check whether one of the mountPaths equals the StorageDataPath. I think we can extend the current StorageSpec to support more VolumeSources, like a native Volume. That way, StorageSpec can remain independent and serve StorageDataPath better with a sensible default behavior.
wdyt @linjunzhe @f41gh7
I think the correct approach is to validate user input and throw an error in this case (/storage is a reserved mount path and cannot be used for volume mounts). The same applies to any other reserved mount names.
vmstorage doesn't support a read-write-many option, and since all pods have the same configuration, there will be a conflict over the lock file.
The proper solution for this case is to create a storage provisioner (like https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) and use it in the claimTemplate definition.
The provisioner must create subfolders for the cluster pods, so there will be no conflicts over storage access.
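With such a provisioner in place, the vmstorage storage section could point at its storage class through a volumeClaimTemplate; a minimal sketch, assuming a class named `local-storage` served by the static provisioner (the class name and size are placeholders):

```yaml
spec:
  vmstorage:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: local-storage   # class served by the static provisioner
          resources:
            requests:
              storage: 100Gi                # placeholder size
```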