
support for nomad

Open shumin1027 opened this issue 2 years ago • 10 comments

I'm trying to use CVMFS through the CSI plugin in Nomad, but I run into problems when creating a volume. It seems that the access_mode parameter configured in Nomad is not supported.

Here is my volume config file:

type = "csi"
id   = "cvmfs-volume"
name = "cvmfs-volume"

plugin_id = "cvmfs0"

capability {
  access_mode     = "multi-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  fs_type = "cvmfs2"
}

secrets {}

The error output:

root@ubuntu:~/nomad-jobs# nomad volume create cvmfs.volume.hcl 
Error creating volume: Unexpected response code: 500 (1 error occurred:
        * controller create volume: CSI.ControllerCreateVolume: volume "cvmfs-volume" snapshot source &{"" ""} is not compatible with these parameters: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported)

Can you provide an example of using cvmfs-csi in Nomad? Ref: https://github.com/cvmfs-contrib/cvmfs-csi/issues/51

shumin1027 avatar Dec 05 '22 08:12 shumin1027

Hi @shumin1027. It seems this error originates from having non-nil AccessibilityRequirements (i.e. topology) in CreateVolumeRequest rather than AccessMode.

https://github.com/cvmfs-contrib/cvmfs-csi/blob/18097c80de75c7325c910d53086cd76ef9aa797f/internal/cvmfs/controller/csiserver.go#L205-L207

I don't have a Nomad environment at hand, so I cannot test this. Could you share logs from the controller plugin (running with the -v=5 verbosity level) so we can see what Nomad's CSI client is passing to the driver? Is there a way to pass nil topology requirements when creating a volume?
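
For reference, a minimal and untested sketch of where the verbosity flag could be added in the controller plugin's Nomad job (the image reference and the remaining flags below are placeholders, not a verified deployment):

job "cvmfs-csi-controller" {
  type = "service"

  group "controller" {
    task "plugin" {
      driver = "docker"

      config {
        image = "cvmfs-csi:latest"           # placeholder image reference
        args = [
          "-v=5",                            # raise log verbosity for debugging
          "--endpoint=unix://csi/csi.sock",  # assumed flag, check the driver's --help
        ]
      }

      # Tells Nomad this task exposes a CSI controller plugin.
      csi_plugin {
        id        = "cvmfs0"
        type      = "controller"
        mount_dir = "/csi"
      }
    }
  }
}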

gman0 avatar Dec 05 '22 10:12 gman0

Another way to do this is to create the volume manually (similar to how you would manually create a PersistentVolume and its PersistentVolumeClaim in Kubernetes). Is this possible in Nomad? That way you would circumvent the provisioning stage.

You can see a Kubernetes example for this here: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/example/volume-pv-pvc.yaml
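
If Nomad has an analog to this, it would presumably be registering the volume with nomad volume register instead of create, so that no CreateVolume RPC is issued. A rough, untested sketch of such a spec (external_id is an arbitrary placeholder, since the cvmfs-csi volume is only a virtual reference):

type        = "csi"
id          = "cvmfs-volume"
name        = "cvmfs-volume"
external_id = "cvmfs-volume"   # arbitrary value; the volume is only a reference for cvmfs-csi
plugin_id   = "cvmfs0"

capability {
  access_mode     = "multi-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  fs_type = "cvmfs2"
}

and then registered with:

nomad volume register cvmfs.volume.hcl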

gman0 avatar Dec 05 '22 10:12 gman0

@gman0
The example I gave above is already creating the volume manually:

nomad volume create cvmfs.volume.hcl 

This is the log output from the controller plugin when creating the volume this way:

I1205 13:47:16.773803       1 grpcserver.go:136] Call-ID 3443: Call: /csi.v1.Controller/CreateVolume
I1205 13:47:16.774096       1 grpcserver.go:137] Call-ID 3443: Request: {"accessibility_requirements":{},"name":"cvmfs-volume","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"cvmfs2"}},"access_mode":{"mode":3}}]}
E1205 13:47:16.774136       1 grpcserver.go:141] Call-ID 3443: Error: rpc error: code = InvalidArgument desc = volume accessibility requirements are not supported

shumin1027 avatar Dec 05 '22 13:12 shumin1027

What I meant is to just register an existing volume without actually triggering CreateVolume call on the driver. I'm not familiar with Nomad so I'm not sure if it's possible.

The volume itself is "virtual", it's just a reference for cvmfs-csi.

gman0 avatar Dec 05 '22 17:12 gman0

Relaxing validation and letting it pass on "accessibility_requirements":{} doesn't seem too unreasonable -- we may include this change in the next point release.

gman0 avatar Dec 05 '22 17:12 gman0

What I meant is to just register an existing volume without actually triggering CreateVolume call on the driver. I'm not familiar with Nomad so I'm not sure if it's possible.

The volume itself is "virtual", it's just a reference for cvmfs-csi.

@gman0 Executing the register command completes correctly:

nomad volume register cvmfs.volume.hcl 

The CSI node plugin can access the content on CVMFS correctly, but it seems the repositories are not automatically mounted in the application container.

JuiceFS provides good support for this use case; we can use it as a reference:

https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/cookbook/csi-in-nomad.md

https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/introduction.md#mount-by-process-by-process

shumin1027 avatar Dec 06 '22 01:12 shumin1027

The CSI node plugin can access the content on CVMFS correctly, but it seems the repositories are not automatically mounted in the application container.

Is there an error message we could troubleshoot? My first guess would be a missing rslave or rshared in the container mount (HostToContainer mount propagation in Kubernetes terminology). See Example: Automounting CVMFS repositories and a Pod definition example.

https://github.com/juicedata/juicefs-csi-driver/blob/master/docs/en/introduction.md#mount-by-process-by-process

I'm not sure I understood correctly, but cvmfs-csi doesn't distinguish between "mount-by-pod" and "mount-by-process". The cvmfs-csi node plugin needs to be already running on all nodes of the cluster that are expected to use CVMFS volumes (DaemonSet in Kubernetes terminology).
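
In Nomad, the closest analog would presumably be a system job, so that the node plugin runs on every client. An untested sketch (the image, privileges, and flags are assumptions to be adapted from the Kubernetes manifests):

job "cvmfs-csi-node" {
  type = "system"   # run on every Nomad client, like a DaemonSet

  group "node" {
    task "plugin" {
      driver = "docker"

      config {
        image      = "cvmfs-csi:latest"      # placeholder image reference
        privileged = true                    # FUSE/bind mounts typically require this
        args = [
          "--endpoint=unix://csi/csi.sock",  # assumed flag
          "--nodeid=${node.unique.name}",    # assumed flag
        ]
      }

      csi_plugin {
        id        = "cvmfs0"
        type      = "node"
        mount_dir = "/csi"
      }
    }
  }
}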

gman0 avatar Dec 08 '22 09:12 gman0

@gman0 Thank you for your help. Your guess might be right: Nomad does not currently seem to support mount propagation.

This is the mount information of the application container: [screenshot]

shumin1027 avatar Dec 10 '22 12:12 shumin1027

Thanks for following this up, @shumin1027. We can continue once this is resolved in Nomad.

gman0 avatar Dec 12 '22 09:12 gman0

@gman0 I fixed this issue: https://github.com/hashicorp/nomad/issues/15524. The mount-propagation option can now be set successfully when mounting the volume.

When I use a host volume to directly mount /cvmfs from the host into the container and set mount propagation to rslave, everything works as expected.
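
For reference, this is roughly how the working host-volume variant looks as a bind mount in the Docker task config (the application image is a placeholder; I assume the Docker driver's mount block with bind propagation here):

task "app" {
  driver = "docker"

  config {
    image = "ubuntu:22.04"   # placeholder application image

    mount {
      type     = "bind"
      source   = "/cvmfs"    # autofs-managed CVMFS root on the host
      target   = "/cvmfs"
      readonly = true

      bind_options {
        propagation = "rslave"   # pick up repositories mounted later on the host
      }
    }
  }
}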

But when I use the cvmfs-csi volume and set mount propagation to rslave, the repositories are still not automatically mounted in the application container.

[screenshot]

shumin1027 avatar Dec 15 '22 10:12 shumin1027