
Documentation for doing model parallelism on multiple GPUs

Open matt-gardner opened this issue 7 years ago • 4 comments

With Theano support dropped, it should be easy to make our models use multiple GPUs (not just with batch parallelism) and to put some parts of the model on the CPU (e.g., the embedding layer, as recommended by Matt Peters). I think this is pretty straightforward, but I haven't done it before. We should:

  1. Write some documentation recommending how and when to use this (with people new to the codebase and to deep learning in general in mind; can we give them some guidance on how to structure a model for optimal efficiency?).
  2. Implement some reasonable defaults, like putting the embedding layer on the CPU, in TextTrainer.
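A minimal sketch of what point 2 could look like with the TensorFlow backend: pin the (large) embedding variable to the CPU with a device scope so only the looked-up slices travel to the GPU. The vocabulary size and dimension here are illustrative, not from the codebase.

```python
import numpy as np
import tensorflow as tf

vocab_size, embedding_dim = 1000, 50

# Place the embedding table on the CPU; '/cpu:0' always exists,
# so this sketch runs with or without a GPU present.
with tf.device('/cpu:0'):
    embedding = tf.Variable(
        np.random.randn(vocab_size, embedding_dim).astype('float32'),
        name='embedding')

# The lookup result can then flow to whatever device the rest of
# the model lives on.
token_ids = tf.constant([[1, 2, 3], [4, 5, 6]])
embedded = tf.nn.embedding_lookup(embedding, token_ids)
```

The win is memory rather than speed: a big vocabulary embedding can dominate GPU memory, while the per-batch lookup transfer is small.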

matt-gardner · Apr 23 '17

#326 addresses point 2 above, but not point 1 yet.

matt-gardner · Apr 29 '17

With the batch parallelism PR merged, I'm renaming this issue to focus on the one remaining thing: I believe models can already use model parallelism if you want, by using device scopes. Making sure this works and providing some documentation for it would be nice, but it's not a high priority.
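For reference, the device-scope pattern meant here might look like the following sketch: different layers of one model pinned to different devices, so activations flow between them. The device names, layer sizes, and function name are illustrative assumptions, not anything from deep_qa itself.

```python
import tensorflow as tf

# Fall back gracefully to CPU when the named GPUs are absent,
# so the sketch runs on any machine.
tf.config.set_soft_device_placement(True)

def two_device_model(inputs):
    # First half of the model on one device...
    with tf.device('/gpu:0'):
        hidden = tf.keras.layers.Dense(128, activation='relu')(inputs)
    # ...second half on another; the activations cross devices here.
    with tf.device('/gpu:1'):
        return tf.keras.layers.Dense(10)(hidden)

out = two_device_model(tf.zeros([4, 20]))
```

Unlike batch parallelism, this splits the model itself, which only pays off when a single replica is too large for one device.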

matt-gardner · Jun 09 '17

I think the more important remaining aspect of parallelism is getting it to work with the various data generators and padding code we have, rather than model parallelism. But yes, in general it would be nice to double-check that this works as smoothly as it should.

DeNeutoy · Jun 09 '17

Agreed, hence the P2.

matt-gardner · Jun 10 '17