Hang Zhang
There are many implementation details involved in reproducing the SoTA results. Synchronized Batch Normalization is one of the most important keys in semantic segmentation.
You may start with my repo https://hangzhang.org/PyTorch-Encoding/experiments/segmentation.html
Removing the parameters whose names contain `num_batches_tracked` should work for older PyTorch versions:
```python
# Drop the BN bookkeeping buffers that older PyTorch versions do not recognize
tparams = net.state_dict()
for k in list(tparams.keys()):
    if 'num_batches_tracked' in k:
        tparams.pop(k)
```
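For instance, the filtered dict can then be saved and loaded on the older-PyTorch side (a minimal sketch; `old_net` and the file name are hypothetical, with `old_net` built from the same architecture under the older PyTorch):
```python
import torch

# Hypothetical usage: dump the filtered weights, then load them under an older PyTorch
torch.save(tparams, 'weights_no_tracked.pth')   # example file name
old_net.load_state_dict(torch.load('weights_no_tracked.pth'))
```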
Please use PyTorch 1.0
Please install PyTorch 1.0
Please see Gluon CV for training details
I rewrote the model in PyTorch and dumped the weights.
I think the best way to implement SyncBN is to do the hacky thing in the backend, just like the Caffe implementation: https://github.com/yjxiong/caffe/pull/107/files
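For context, the core idea of SyncBN, wherever it is implemented, is to all-reduce the per-GPU batch statistics so every worker normalizes with the same global mean and variance. Below is a minimal, forward-only Python sketch of that idea (not the in-backend approach mentioned above); `NaiveSyncBN` is a hypothetical name, the code assumes `torch.distributed` is already initialized for data-parallel training, and a real implementation would also handle the backward pass and running statistics.
```python
import torch
import torch.distributed as dist
from torch import nn

class NaiveSyncBN(nn.Module):
    """Forward-only sketch of synchronized batch norm across GPUs."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):  # x: (N, C, H, W)
        # Per-GPU first and second moments over batch and spatial dims
        mean = x.mean(dim=[0, 2, 3])
        meansqr = (x * x).mean(dim=[0, 2, 3])

        # Sum the statistics over all processes, then average them.
        # Note: gradients through this collective are not handled in this sketch.
        stats = torch.cat([mean, meansqr])
        dist.all_reduce(stats)
        stats = stats / dist.get_world_size()

        mean, meansqr = torch.split(stats, x.shape[1])
        var = meansqr - mean * mean

        # Normalize with the shared (global) statistics
        x = (x - mean.view(1, -1, 1, 1)) / torch.sqrt(var.view(1, -1, 1, 1) + self.eps)
        return x * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
```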
Why is your previous implementation not compatible with NNVM?