
Supporting multiple GPU models

Open · auroracramer opened this issue 6 years ago • 4 comments

Should support for running the embedding models on multiple GPUs be prioritized? Here are the pros and cons as I see them (not necessarily equally weighted in importance):

Pros

  • Allows users to take advantage of multiple GPUs for shorter running times

Cons

  • Adds an extra parameter to most API calls, though this can be optional
  • Adds bulk to the codebase (though the code for this already exists)
  • Can we test this on Travis?

All in all, I think that if we believe using multiple GPUs will be a common use case, we should include it. But if it's something that will rarely (if ever) be used, we shouldn't prioritize it, at least for an MVP.

auroracramer · Nov 01 '18 20:11

Another question that comes up if we add multi-GPU support is what the default number of GPUs should be. Just 1, or the maximum number available?

auroracramer · Nov 01 '18 20:11

I think supporting multiple GPUs would be nice, but it's definitely not critical for the MVP.

If people have access to multiple GPUs, they can always parallelize over the data themselves to make use of all of them, without the need for explicit multi-GPU support in the library.
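For example, a minimal sketch of that approach, assuming one worker process per GPU pinned via CUDA_VISIBLE_DEVICES (extract_embeddings.py is a hypothetical per-file script, not part of openl3, and the file names are illustrative):

```python
# Data parallelism without library support: launch one worker process
# per GPU, each restricted to a single device via CUDA_VISIBLE_DEVICES.
import os
import subprocess

audio_files = ["clip_000.wav", "clip_001.wav", "clip_002.wav", "clip_003.wav"]
n_gpus = 2

procs = []
for gpu_id in range(n_gpus):
    shard = audio_files[gpu_id::n_gpus]  # interleaved shard of files per GPU
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    procs.append(subprocess.Popen(["python", "extract_embeddings.py", *shard], env=env))

for p in procs:
    p.wait()
```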

Also: would this require saving additional model files, since Keras multi-GPU model files are stored differently on disk?

justinsalamon · Nov 01 '18 20:11

Fair enough, let's not prioritize this for the MVP. Supporting multiple GPUs wouldn't require saving additional model files; we can wrap each model for multi-GPU inference after loading it.
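Rough sketch of that, using the Keras 2.x multi_gpu_model utility (the model file name and dummy batch are assumptions for illustration):

```python
# Wrap an already-loaded model for multi-GPU inference; the weights on
# disk are untouched, so no extra model files need to be shipped.
# "openl3_audio_model.h5" is a placeholder file name.
import numpy as np
from keras.models import load_model
from keras.utils import multi_gpu_model

model = load_model("openl3_audio_model.h5")

n_gpus = 2
if n_gpus > 1:
    # Replicates the model on each GPU and splits every batch across them.
    model = multi_gpu_model(model, gpus=n_gpus)

batch = np.zeros((8,) + model.input_shape[1:])  # dummy batch for illustration
embeddings = model.predict(batch)
```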

auroracramer · Nov 01 '18 21:11

I imagine we could support it via an optional n_gpu arg with a default value of 1.
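Something like this hypothetical sketch (the function name and parameters are illustrative, not the settled API):

```python
from keras.utils import multi_gpu_model

def get_embedding(audio_batch, model, n_gpu=1):
    """Compute embeddings; n_gpu=1 keeps current single-GPU behavior."""
    if n_gpu > 1:
        # Wrap the already-loaded model, per the comment above;
        # no additional model files are involved.
        model = multi_gpu_model(model, gpus=n_gpu)
    return model.predict(audio_batch)
```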

justinsalamon · Nov 01 '18 22:11