magmadnn
[FEATURE REQUEST] Easier integration of model parallelism
Is your feature request related to a problem? Please describe.
Models are not distributed across devices by default when they exceed device memory. As a result, the user must first hit an out-of-memory error and only then apply model parallelism techniques by hand.
Describe the solution you'd like
The Model class or Compute Graph should be able to detect this and distribute accordingly.