ssd.pytorch
Number of priors wrong when using multiple GPUs
Although issue #20 says this problem was solved in the latest version, I still hit it: the number of prior boxes is multiplied by the number of GPUs when the forward pass runs on multiple GPUs. It has been confusing me for a long time. I hope someone can give some guidance on this. Thanks a lot!
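For context, the behaviour is consistent with how torch.nn.DataParallel works: it gathers every tensor the wrapped module returns along dim 0, so if the priors are part of the tuple returned from forward() (as in ssd.pytorch), they get concatenated once per replica. Below is a minimal sketch of that effect; ToyDetector and its shapes are illustrative, not taken from the repo, and the example assumes at least two GPUs are available.

```python
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Toy module that, like ssd.pytorch, returns a fixed priors tensor from forward()."""
    def __init__(self, num_priors=8732):
        super().__init__()
        # Fixed "priors", analogous to self.priors in the SSD model.
        self.register_buffer('priors', torch.rand(num_priors, 4))

    def forward(self, x):
        # Per-sample predictions: shape (batch, num_priors, 4).
        loc = torch.zeros(x.size(0), self.priors.size(0), 4, device=x.device)
        # The priors returned here are gathered across replicas as well.
        return loc, self.priors

if torch.cuda.device_count() >= 2:
    net = nn.DataParallel(ToyDetector().cuda(), device_ids=[0, 1])
    loc, priors = net(torch.rand(8, 3, 300, 300).cuda())
    print(loc.shape)     # (8, 8732, 4)  -- batch dim gathered as expected
    print(priors.shape)  # (17464, 4)    -- priors duplicated: 8732 * 2 GPUs
```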
In the training script, at line no. 99, use: net = torch.nn.DataParallel(ssd_net, device_ids=[0])
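For reference, a sketch of how that change could look around that point in train.py; the build_ssd call is paraphrased from the repo (300x300 input, 21 classes in the default VOC config) and may differ between versions.

```python
import torch
from ssd import build_ssd  # from the ssd.pytorch repo

# Build the SSD network as train.py does.
ssd_net = build_ssd('train', 300, 21)

# Restrict DataParallel to GPU 0 only, so outputs are not gathered from
# multiple replicas and the priors tensor keeps its original size.
net = torch.nn.DataParallel(ssd_net, device_ids=[0]).cuda()
```

With device_ids=[0] only one replica runs, so the priors in the output keep their expected count (e.g. 8732 boxes for the 300x300 model), at the cost of training on a single GPU.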