
external NFS storage support

Open · cloustone opened this issue 6 years ago • 13 comments

Hello, @FfDL. We deploy FfDL in a private environment where S3 and Swift are not available; only NFS external storage is supported. For the model definition file we can use localstack in the current dev environment; for the training data we would like to use NFS. The following steps are our adaptations for NFS:

  1. Deploy an external NFS server out of kubernetes.
  2. Add PVs declaration in templates folder
  3. Add PVCs file "/etc/static-volumes/PVCs.yaml" in LCM docker environment
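For reference, a minimal sketch of what steps 2 and 3 might look like as Kubernetes manifests. The server address, paths, sizes, and names below are placeholders, not values from this issue:

```yaml
# Step 2: a PV declaration pointing at the external NFS server (templates folder)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-training-pv-1          # placeholder name
spec:
  capacity:
    storage: 20Gi                  # placeholder size
  accessModes:
    - ReadWriteMany                # NFS allows many readers/writers
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10              # placeholder: external NFS server address
    path: /exports/training-data   # placeholder: exported directory
---
# Step 3: a matching claim, e.g. one entry of /etc/static-volumes/PVCs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-training-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  volumeName: nfs-training-pv-1    # bind explicitly to the static PV above
```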

We are verifying the above method, but a new question has already come up: if two models are submitted at the same time and both use NFS static external storage at the same mount point, is that a problem?

Could you please confirm the above method, answer the question, or provide the right solution for us?

Thanks

cloustone avatar Jun 12 '18 04:06 cloustone

@cloustone there's ongoing work on cleaning up that tight integration, and we should have something out relatively soon.

The thought process is that you can create a PVC, load all the training data onto it, and then provide a PVC reference id/name in the manifest file, similar to the way you provide S3 details there. The learner can then mount that PVC instead of the S3 storage and use the data.
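The exact manifest schema for this hadn't been settled at the time of this comment; as a rough sketch of the idea, reusing the `data_stores` shape FfDL manifests already have (the `mount_volume` type and `pvc_name` key here are hypothetical, not FfDL's final schema):

```yaml
# Hypothetical manifest fragment: reference a pre-loaded PVC instead of S3.
data_stores:
  - id: nfs-data
    type: mount_volume             # hypothetical type, in place of an S3 datastore
    training_data:
      container: training-data     # directory on the volume, analogous to a bucket
    connection:
      pvc_name: nfs-training-pvc-1 # hypothetical key: PVC the learner pod mounts
```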

atinsood avatar Jun 13 '18 02:06 atinsood

@atinsood thanks for your reply. I just used dynamic external storage with NFS to deploy model training. It seems OK.

cloustone avatar Jun 13 '18 12:06 cloustone

@cloustone would love to get more details about how you did this. We would love to include a PR with a doc describing how to leverage NFS, with the steps you defined above:

> The following steps are our adaptions for NFS.
>
> Deploy an external NFS server out of kubernetes. Add PVs declaration in templates folder. Add PVCs file "/etc/static-volumes/PVCs.yaml" in LCM docker environment.

animeshsingh avatar Jun 13 '18 17:06 animeshsingh

@cloustone

> I just used dynamic external storage with NFS to deploy model train. It seems ok.

Curious how you got this going from a technical perspective :)

Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in LCM, and then LCM can just pick one PVC and allocate it to a training job (basically change https://github.com/IBM/FfDL/blob/master/lcm/service/lcm/learner_deployment_helpers.go#L493 and add the volume mount).

I wonder whether you went this route or a different one.
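A rough sketch of that ConfigMap idea, with placeholder names and namespace; LCM would mount this at the `/etc/static-volumes` path mentioned above and hand out one unallocated PVC per training:

```yaml
# Hypothetical ConfigMap listing pre-created PVCs for LCM to allocate.
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-volumes       # placeholder name
  namespace: ffdl            # placeholder namespace
data:
  PVCs.yaml: |
    static-volumes:
      - name: nfs-training-pvc-1
      - name: nfs-training-pvc-2
```

Mounting this ConfigMap as a volume in the LCM deployment would surface the list as the file `/etc/static-volumes/PVCs.yaml`, matching the path from the original steps.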

atinsood avatar Jun 14 '18 00:06 atinsood

@atinsood Yes, the method is almost the same as what you suggested:

> thinking more about your initial suggestion, you can also have a configmap with a list of pvcs that you have created before hand, mount it as a volume in lcm, and then lcm can just pick 1 pvc and allocate it to training.

cloustone avatar Jun 15 '18 03:06 cloustone

@cloustone another interesting thing you can try is this: https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/

https://github.com/IntelAI/vck

We have been looking into this as well. It can help bring the data down to the nodes running the GPUs, so you'd end up accessing it the way you would access local data on those machines.

This is an interesting approach and should work well if you don't need isolation of training data for every training run.

atinsood avatar Jun 17 '18 18:06 atinsood

@atinsood Thanks, we will try this method depending on our requirements.

cloustone avatar Jun 18 '18 13:06 cloustone


@cloustone Can you please tell me in detail how to use NFS? I also want to use NFS but I do not know how. Which files did you change, and how? Thank you very much.

Eric-Zhang1990 avatar Jan 16 '19 03:01 Eric-Zhang1990

> @cloustone other interesting thing that you can try is this https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/
>
> https://github.com/IntelAI/vck
>
> we have been looking into this as well. but this can help bring data down to your nodes running the gpus and you'd end up accessing the data as you would access local data on those machines.
>
> this is an interesting approach and should work well if you don't have a need of isolation of training data for every training.

@atinsood Have you added this method to FfDL? Or do you have documentation on how to use this method in FfDL? Thank you very much.

Eric-Zhang1990 avatar Jan 16 '19 03:01 Eric-Zhang1990

@Tomcli @fplk did you try the Intel vck approach with FfDL?

atinsood avatar Jan 16 '19 04:01 atinsood

@atinsood @Eric-Zhang1990 No, we do not currently have vck integration in FfDL.

@atinsood said:

> and you'd end up accessing the data as you would access local data on those machines.

Which I think just implies a host mount, which I believe is enabled in the current FfDL. So you could give that a try.
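For concreteness, a host mount in Kubernetes is a `hostPath` volume; a generic pod-spec fragment (not FfDL-specific, with placeholder image and paths) looks like this:

```yaml
# Generic pod fragment showing a hostPath mount (names/paths are placeholders).
spec:
  containers:
    - name: learner
      image: example/learner:latest    # placeholder image
      volumeMounts:
        - name: local-data
          mountPath: /data             # where the training code reads data
  volumes:
    - name: local-data
      hostPath:
        path: /mnt/training-data       # placeholder path on the node
        type: Directory
```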

@cloustone said:

> thinking more about your initial suggestion, you can also have a configmap with a list of pvcs that you have created before hand, mount it as a volume in lcm, and then lcm can just pick 1 pvc and allocate it to training.

We do have an internal PR that enables the use of generic PVCs for training and result volumes. I don't think we need a ConfigMap: the idea is that PVC allocation is done by some other process, and we then just point to the training data and result data volumes by name in the manifest.
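Under that scheme, "some other process" would create the claims up front, e.g. with plain Kubernetes manifests like these (names and sizes are placeholders), and a training job would then reference them by name:

```yaml
# Pre-created claims that a training job could later reference by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data-pvc       # placeholder: named as the training data volume
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: result-data-pvc         # placeholder: named as the result volume
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 10Gi
```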

Perhaps we can externalize this in the next few days, at least on a branch, so you could give it a try. Let me see what I can do.

sboagibm avatar Jan 16 '19 14:01 sboagibm

@sboagibm Thank you for your kind reply. You say "then we just point to the training data and result data volumes by name, in the manifest." Can you give me an example of a manifest file that uses a local path on the host?

I found a file at https://github.com/IBM/FfDL/blob/vck-patch/etc/examples/vck-integration.md. Is that manifest file what you mean? If so, can I add multiple learners in it?

Thank you very much.

Eric-Zhang1990 avatar Jan 17 '19 01:01 Eric-Zhang1990

@cloustone @atinsood @sboagibm How can we use NFS to store data and start training jobs? Can you provide more detailed docs for us? Thanks.

Eric-Zhang1990 avatar Feb 26 '19 07:02 Eric-Zhang1990