external NFS storage support
Hello @FfDL, we deploy FfDL in a private environment in which S3 and Swift are not available; only external NFS storage is supported. For the model definition file we can use localstack in the current dev environment, but for the training data we would like to use NFS. The following steps are our adaptations for NFS.
- Deploy an external NFS server outside of Kubernetes.
- Add PV declarations in the templates folder.
- Add a PVCs file "/etc/static-volumes/PVCs.yaml" in the LCM Docker environment (a sketch of such a PV/PVC pair is shown below).
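As a rough illustration of the last two steps, here is a minimal sketch of an NFS-backed PV and a matching static PVC; the server address, export path, sizes, and names are placeholders for your own environment, not FfDL defaults.

```yaml
# Illustrative PersistentVolume pointing at the external NFS server.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-training-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100            # placeholder: address of the external NFS server
    path: /exports/training-data     # placeholder: exported directory
---
# Matching static PVC, e.g. one of the claims listed in /etc/static-volumes/PVCs.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-training-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""               # empty class so the claim binds to the static PV
  volumeName: nfs-training-pv
  resources:
    requests:
      storage: 20Gi
```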
We are still validating the above method, but a new question has already come up: if two models are submitted at the same time and both use NFS static external storage at the same mount point, is that a problem?
Would you please confirm the above method, answer the question, or point us to the right solution?
Thanks
@cloustone there's work going on to clean up that tight integration, and we should have something out relatively soon.
The thought process is that you can create a PVC, load all the training data onto that PVC, and in the manifest file provide a PVC reference id/name, similar to the way you provide S3 details in the manifest today; the learner can then mount that PVC rather than the S3 storage and use the data.
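A hedged sketch of what such a manifest fragment could look like; the `type: pvc` and `pvc_name` keys are purely illustrative assumptions, since the actual schema was still being worked out at the time of this discussion, and only the `data_stores` section is shown.

```yaml
# Hypothetical manifest fragment: replace the S3 connection details with a PVC reference.
# The pvc-related keys are NOT part of the released FfDL manifest schema.
data_stores:
  - id: training-data
    type: pvc                        # assumed value, used here instead of an S3/COS mount
    training_data:
      container: train               # directory inside the mounted volume
    training_results:
      container: results
    connection:
      pvc_name: nfs-training-pvc     # PVC created and loaded with data beforehand
```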
@atinsood thanks for your reply. I just used dynamic external storage with NFS to deploy model training. It seems OK.
@cloustone would love to get more details about how you did this. We would love to include a PR with a doc describing how to leverage NFS, with the steps you defined above:
"The following steps are our adaptions for NFS.
Deploy an external NFS server out of kubernetes. Add PVs declaration in templates folder Add PVCs file "/etc/static-volumes/PVCs.yaml" in LCM docker environment"
@cloustone said:
I just used dynamic external storage with NFS to deploy model training. It seems OK.
Curious how you got this going from a technical perspective :)
Thinking more about your initial suggestion, you can also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to a training job (basically change https://github.com/IBM/FfDL/blob/master/lcm/service/lcm/learner_deployment_helpers.go#L493 and add the volume mount).
I wonder if you went this route or a different one.
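A minimal sketch of that idea, assuming PVCs named nfs-pvc-1 and nfs-pvc-2 already exist; the ConfigMap name, the file layout inside it, and the /etc/static-volumes mount path simply echo the path mentioned earlier in this thread and are assumptions, not something FfDL ships today.

```yaml
# Illustrative ConfigMap listing pre-created PVCs for the LCM to pick from.
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-training-pvcs
data:
  PVCs.yaml: |
    static-volumes:
      - name: nfs-pvc-1
      - name: nfs-pvc-2
---
# Fragment of an LCM Deployment mounting that ConfigMap at /etc/static-volumes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lcm
spec:
  selector:
    matchLabels:
      app: lcm
  template:
    metadata:
      labels:
        app: lcm
    spec:
      containers:
        - name: lcm
          image: ffdl-lcm:latest               # placeholder image name
          volumeMounts:
            - name: static-volumes
              mountPath: /etc/static-volumes   # LCM reads PVCs.yaml from here
      volumes:
        - name: static-volumes
          configMap:
            name: static-training-pvcs
```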
@atinsood Yes, the method is almost the same as the one you described:
"Thinking more about your initial suggestion, you can also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to training."
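Since "dynamic external storage with NFS" was mentioned above, one common way to get such pre-created PVCs is an NFS dynamic provisioner. Below is a sketch assuming an external NFS provisioner has already been deployed in the cluster; the provisioner name is a placeholder, so substitute the value your provisioner actually registers.

```yaml
# Illustrative StorageClass backed by an NFS dynamic provisioner (placeholder name).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs
reclaimPolicy: Retain
---
# A claim the provisioner satisfies on demand; the LCM or manifest can then reference it by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-dynamic
  resources:
    requests:
      storage: 20Gi
```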
@cloustone another interesting thing that you can try is this: https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/
https://github.com/IntelAI/vck
We have been looking into this as well. It can help bring data down to the nodes running your GPUs, and you'd end up accessing the data as you would access local data on those machines.
This is an interesting approach and should work well if you don't need isolation of training data for every training job.
@atinsood Thanks, we will try this method depending on our requirements.
@cloustone Can you please tell me in detail how to use NFS? I also want to use NFS but I do not know how. Which files did you change, and how did you change them? Thank you very much.
@atinsood Have you added the Intel VCK approach to FfDL? Or do you have documentation about how to use it with FfDL? Thank you very much.
@Tomcli @fplk did you try the Intel VCK approach with FfDL?
@atinsood @Eric-Zhang1990 No, we do not currently have vck integration in FfDL.
@atinsood said:
and you'd end up accessing the data as you would access local data on those machines.
Which I think just implies a host mount, and I believe that is enabled in the current FfDL. So you could give that a try.
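For reference, a minimal sketch of the generic Kubernetes host-mount pattern that comment alludes to, assuming the training data has already been staged at /data/training on the GPU nodes (e.g. by VCK); the image, paths, and pod name are illustrative and not FfDL's actual learner spec.

```yaml
# Illustrative pod that reads node-local data through a hostPath volume.
apiVersion: v1
kind: Pod
metadata:
  name: learner-hostpath-demo
spec:
  containers:
    - name: learner
      image: tensorflow/tensorflow:1.5.0
      command: ["python", "/job/train.py"]
      volumeMounts:
        - name: training-data
          mountPath: /mnt/training-data    # the learner sees the data as local files
  volumes:
    - name: training-data
      hostPath:
        path: /data/training               # data staged on the node beforehand
        type: Directory
```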
@cloustone said:
Thinking more about your initial suggestion, you can also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to training.
We do have an internal PR that enables use of generic PVCs for training and result volumes. I don't think we need a configmap? The idea is that PVC allocation is done by some other process, and then we just point to the training data and result data volumes by name, in the manifest.
Perhaps we can go ahead and externalize this in the next few days, at least on a branch, and you could give it a try. Let me see what I can do.
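As a rough illustration of what "point to the volumes by name" could translate to on the Kubernetes side; the claim names and mount paths are hypothetical, and in practice the learner pod spec is generated by the LCM rather than written by hand.

```yaml
# Hypothetical learner pod fragment: mount pre-allocated PVCs for training data and results.
apiVersion: v1
kind: Pod
metadata:
  name: learner-pvc-demo
spec:
  containers:
    - name: learner
      image: tensorflow/tensorflow:1.5.0
      volumeMounts:
        - name: training-data
          mountPath: /mnt/data
        - name: training-results
          mountPath: /mnt/results
  volumes:
    - name: training-data
      persistentVolumeClaim:
        claimName: training-data-pvc       # referenced by name in the manifest
    - name: training-results
      persistentVolumeClaim:
        claimName: training-results-pvc
```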
@sboagibm Thank you for your kind reply. You say "then we just point to the training data and result data volumes by name, in the manifest." Can you give me an example of a manifest file that uses a local path on the host?
I found a file at "https://github.com/IBM/FfDL/blob/vck-patch/etc/examples/vck-integration.md"; is that the kind of manifest file you mean? If it is, can I add multiple learners in it?
Thank you very much.
@cloustone @atinsood @sboagibm How can we use NFS to store data and start training jobs? Can you provide more detailed docs for us? Thanks.