Synchronized-BatchNorm-PyTorch
Synchronized Batch Normalization implementation in PyTorch.
Hi~ Thanks for your code, first of all! I use SyncBatch for training SSD, and during training I can get 46.81% mAP after 10 epochs. However, when I use the...
How do I export the model to ONNX? The export raises an error like this: ``` RuntimeError: Unsupported: ONNX export of batch_norm for unknown channel size. ```
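A workaround sometimes tried for this class of export error (not confirmed in this issue, and the `revert_sync_batchnorm` helper below is hypothetical, not part of this repo) is to swap the synchronized layers back to plain `nn.BatchNorm2d` before calling `torch.onnx.export`, so ONNX sees a standard batch_norm op with a fixed channel size:

```python
import torch
import torch.nn as nn


def revert_sync_batchnorm(module):
    """Hypothetical helper: recursively replace SynchronizedBatchNorm2d layers
    with plain nn.BatchNorm2d, copying the affine parameters and running stats,
    so torch.onnx.export sees a standard op with a known channel size."""
    from sync_batchnorm import SynchronizedBatchNorm2d

    output = module
    if isinstance(module, SynchronizedBatchNorm2d):
        output = nn.BatchNorm2d(module.num_features, module.eps,
                                module.momentum, module.affine)
        if module.affine:
            output.weight.data = module.weight.data.clone()
            output.bias.data = module.bias.data.clone()
        output.running_mean = module.running_mean
        output.running_var = module.running_var
    for name, child in module.named_children():
        output.add_module(name, revert_sync_batchnorm(child))
    return output


# Usage sketch: unwrap DataParallel if present, revert, then export on CPU
# with a fixed-size dummy input (shape is an assumption, adjust to your model).
# model = revert_sync_batchnorm(model.module if hasattr(model, "module") else model)
# torch.onnx.export(model.cpu().eval(), torch.randn(1, 3, 224, 224), "model.onnx")
```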
Hi, Thank you for the great code. I have looked at the related issues, but it turns out they don't help in my case. I have a network using...
Hi, Good job! I tried to use it as

```
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = Model(...)
model = nn.DataParallel(model, device_ids=[0, 1])
model = convert_model(model).to(device)
```

However, it got stuck...
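For reference, a minimal sketch of the usage pattern shown in this repo's README: wrap the model with `nn.DataParallel` first, then call `convert_model` on the wrapper (the two-GPU `device_ids` and the ResNet-18 stand-in below are just assumptions for the sketch):

```python
import torch
import torch.nn as nn
from torchvision import models
from sync_batchnorm import convert_model

# A standard single-GPU model, wrapped for two GPUs, then converted to SyncBN.
m = models.resnet18()
m = nn.DataParallel(m, device_ids=[0, 1])
m = convert_model(m).cuda()   # BatchNorm2d layers are now SynchronizedBatchNorm2d

x = torch.randn(8, 3, 224, 224).cuda()
y = m(x)                      # forward pass shares BN statistics across replicas
print(y.shape)
```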
Hi~ I use

```
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
    model = bnconvert(model)
    model.cuda()
```

to enable sync-bn during multi-GPU training, but when training the network, it looks...
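If plain `nn.DataParallel` plus conversion does not actually synchronize, another route the package exposes is to patch the replication callback on the `DataParallel` wrapper (or use `DataParallelWithCallback` directly). A hedged sketch, assuming `convert_model` and `patch_replication_callback` as exported by the repo's `sync_batchnorm` package; the tiny `nn.Sequential` network is a stand-in just to make the snippet self-contained:

```python
import torch
import torch.nn as nn
from sync_batchnorm import convert_model, patch_replication_callback

# Stand-in network (hypothetical) with a BatchNorm layer to convert.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
model = convert_model(model)             # BatchNorm2d -> SynchronizedBatchNorm2d

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
    patch_replication_callback(model)    # lets replicas exchange BN statistics
model = model.cuda()

# Assumes at least one CUDA device is available.
out = model(torch.randn(4, 3, 32, 32).cuda())
print(out.shape)
```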
First of all, thank you for the implementation. It's very helpful. I have one question. After sync batch norm is applied, it consumes more GPU memory than normal batch norm....
Hi, Thank you for your work and sharing. I try to use the `convert_model` function in my own code, for example: ``` cudnn.benchmark = True net = Network() net.cuda() net =...
Hi, thanks a lot for your code. But when I apply this code to my implemented end-to-end (e2e) version of FPN, some weird things happen. If I use 8 cards, the...