NonDeepNetworks

How does the model parallelize across GPUs?

Gumpest opened this issue 4 years ago · 5 comments

Could you provide more details on parallelizing across GPUs, e.g., how to implement it in PyTorch?

Gumpest · Dec 03 '21 08:12

You can use PyTorch Lightning instead. It automatically parallelizes model training across GPUs, and also supports TPUs by changing a single argument.
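
For reference, a minimal sketch of that approach with a recent PyTorch Lightning version (the `LitToy` module below is a placeholder, not a ParNet wrapper from this repo):

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class LitToy(pl.LightningModule):
    """Toy module standing in for a real ParNet wrapper (hypothetical)."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    data = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))
    # Lightning launches one process per device and syncs gradients for you.
    trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp", max_epochs=1)
    # Switching to TPUs is just: pl.Trainer(accelerator="tpu", devices=8)
    trainer.fit(LitToy(), DataLoader(data, batch_size=32))
```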

RahulBhalley · Dec 04 '21 05:12

We use the NCCL backend with PyTorch to parallelize the streams during inference (testing). For training, we use the usual distributed setup.
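
For readers unfamiliar with that setup, here is a minimal sketch of the usual NCCL/DDP training configuration (not the repo's actual training script; the linear layer is a placeholder for ParNet). It would typically be launched with `torchrun --nproc_per_node=<num_gpus> train.py`:

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# One process per GPU; torchrun sets LOCAL_RANK for each process.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Placeholder model; a real script would build ParNet here.
model = nn.Linear(32, 10).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Dummy step: DDP all-reduces gradients across GPUs via NCCL.
x = torch.randn(8, 32, device=local_rank)
y = torch.randint(0, 10, (8,), device=local_rank)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

dist.destroy_process_group()
```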

imankgoyal · Dec 05 '21 07:12

So we need at least three GPUs to run inference over the three streams in ParNet?

Gumpest · Dec 06 '21 03:12

Yes, if you want to do multi-GPU inference. Otherwise, you can also do single-GPU inference, but it will be slower.
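
As an illustration only (the branch modules below are placeholders, not the actual ParNet streams from this repo), multi-GPU inference can place each stream on its own device so the branches run concurrently, while single-GPU inference executes them one after another:

```python
import torch
import torch.nn as nn

# Placeholder branches standing in for ParNet's three parallel streams.
branches = [nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.SiLU()) for _ in range(3)]


def multi_gpu_inference(x):
    # One branch per GPU; kernel launches are asynchronous, so the
    # three devices can compute their branches concurrently.
    outs = []
    for i, branch in enumerate(branches):
        dev = torch.device(f"cuda:{i}")
        outs.append(branch.to(dev)(x.to(dev, non_blocking=True)))
    # Gather branch outputs on GPU 0 before fusion (fusion omitted here).
    return [o.to("cuda:0") for o in outs]


def single_gpu_inference(x):
    # Same computation on one GPU; branches run sequentially, hence slower.
    dev = torch.device("cuda:0")
    x = x.to(dev)
    return [branch.to(dev)(x) for branch in branches]


if __name__ == "__main__":
    with torch.no_grad():
        img = torch.randn(1, 3, 224, 224)
        if torch.cuda.device_count() >= 3:
            outs = multi_gpu_inference(img)
        else:
            outs = single_gpu_inference(img)
        print([tuple(o.shape) for o in outs])
```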

imankgoyal · Dec 06 '21 04:12

@imankgoyal

For an edge device, using multiple GPUs for inference is expensive. What is your opinion?

twmht · May 12 '23 12:05