Roman Trigubenko
Hello @benjaminpkane, thank you for the prompt reply. Unfortunately, I can't provide the full code related to the issue due to an NDA, but I'll describe it in as much detail as...
@benjaminpkane If we use torch.nn.DataParallel (single-process, multi-threaded parallelism) to use several GPUs, at first sight everything seems alright, except that a UserWarning is raised every epoch during model training. Not sure that it...
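For context, a minimal sketch of the wrapping described above (the model name and shapes are hypothetical, not from the original report). `torch.nn.DataParallel` replicates the module across all visible GPUs and splits each batch along dim 0; on a CPU-only machine it transparently falls back to running the wrapped module on a single device, so the snippet below runs anywhere:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the NDA-covered one
model = nn.Linear(10, 2)

# Single-process multi-threaded parallelism: replicas on each GPU,
# batches scattered along dim 0 and outputs gathered back
parallel_model = nn.DataParallel(model)

x = torch.randn(4, 10)          # batch of 4 samples
out = parallel_model(x)
print(out.shape)                # torch.Size([4, 2])
```

Note that because `DataParallel` drives all replicas from one Python process, it is generally slower than `DistributedDataParallel` and is where per-epoch warnings from worker threads tend to surface.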
Hi @brimoor, thank you for the hint! After the modification, the UserWarning mentioned above is gone and the dataset loads as intended with torch.nn.DataParallel. At the same time, using torch.nn.parallel.DistributedDataParallel...
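For reference, a minimal single-process DistributedDataParallel setup looks roughly like the sketch below. This is an illustrative assumption, not the original code: it uses the CPU-only `gloo` backend with `world_size=1`, and the address/port values are arbitrary placeholders. In a real multi-GPU run each rank would be its own process (e.g. launched via `torchrun`):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Placeholder rendezvous settings for a single local process
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# No device_ids => the wrapped module stays on CPU
model = DDP(torch.nn.Linear(10, 2))
out = model(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 2])

dist.destroy_process_group()
```

Unlike `DataParallel`, each DDP process holds one model replica and synchronizes gradients across processes, which is why dataset loading has to be coordinated per rank.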
Sounds like a plan, thank you for the prompt support. Should we close this issue until you add support for the distributed training workflow, or keep it open?
Hi, any updates?