CCNet
About the new support for PyTorch 1.x
The new support for PyTorch 1.x is much better to use for multi-GPU training. Does it achieve the same training performance as the previous PyTorch 0.4.1 version? @speedinghzl
@mingminzhen Thanks for asking. They achieve the same performance when both use OHEM. Without OHEM, the new version reaches 78.5+ mIoU, which is lower than the previous one, so the new version is still being improved.
@speedinghzl So do you use an OHEM threshold of 0.6 or 0.7 to achieve the same performance?
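For readers following along: the threshold being discussed controls which pixels OHEM keeps for the loss. A minimal NumPy sketch of the idea, assuming the common formulation (keep pixels whose predicted probability for the ground-truth class is below `thresh`, with a `min_kept` fallback) — the exact values and fallback rule are illustrative, not necessarily CCNet's implementation:

```python
import numpy as np

def ohem_ce_loss(probs, labels, thresh=0.7, min_kept=5):
    """OHEM-style cross-entropy over hard pixels only.

    probs:  (N, C) softmax probabilities, one row per pixel
    labels: (N,)   ground-truth class indices
    A pixel is "hard" if its predicted probability for the true
    class is below `thresh`; if too few qualify, fall back to the
    `min_kept` hardest pixels so the loss never averages over zero.
    """
    gt_probs = probs[np.arange(len(labels)), labels]
    hard = gt_probs < thresh
    if hard.sum() < min_kept:
        # fallback: take the min_kept lowest-confidence pixels
        hard = np.zeros_like(hard)
        hard[np.argsort(gt_probs)[:min_kept]] = True
    kept = gt_probs[hard]
    # mean negative log-likelihood over the selected hard pixels
    return float(-np.log(np.clip(kept, 1e-12, None)).mean())
```

With a higher threshold (e.g. 0.7 instead of 0.6), more moderately-confident pixels count as hard, so the loss averages over a larger, slightly easier set — which is why the threshold can shift final mIoU.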
@speedinghzl Actually, when I use my network in the new version, I just get lower performance, and I am not sure what happened. It seems you use DistributedDataParallel from apex and inplace_sync_bn in the new version. Is it possible these two affect the final result?
@mingminzhen I think the problem is inplace_sync_bn.
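One way to test that hypothesis is to swap the in-place sync BN layers for plain `nn.BatchNorm2d` and compare a single-GPU run. A hedged sketch — the class name `InPlaceABNSync` is assumed from the CCNet repo's BN module, and `num_features` is assumed to be exposed on it:

```python
import torch.nn as nn

def replace_inplace_syncbn(model):
    """Debugging aid: recursively replace custom in-place sync BN
    layers with standard nn.BatchNorm2d, so a single-GPU run can
    rule out inplace_sync_bn as the source of the accuracy drop.
    Matching by class name is an assumption about the custom layer.
    """
    for name, child in model.named_children():
        if type(child).__name__ in ("InPlaceABNSync", "InPlaceABN"):
            # swap in a vanilla BN layer with the same channel count
            setattr(model, name, nn.BatchNorm2d(child.num_features))
        else:
            replace_inplace_syncbn(child)
    return model
```

If the gap disappears after the swap, the BN variant (or its interaction with apex's DistributedDataParallel) is the likely cause; if it persists, the issue is elsewhere in the training setup.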