Hang Zhang


https://github.com/zhanghang1989/rfconv

Your PyTorch version is slightly out-of-date. You may update PyTorch or remove that argument.

What is the batch size per GPU? Ideally, it should be 16 or greater. If not, please try using SyncBatchNorm. The reason is that there is BatchNorm inside the Split-Attention module.
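A minimal sketch (not from the thread) of how such a conversion can be done in PyTorch; the model here is just an example, and distributed training still needs an initialized process group:

```python
# Convert every BatchNorm layer (including those inside the Split-Attention
# blocks) to SyncBatchNorm so statistics are aggregated across GPUs when the
# per-GPU batch size is small.
import torch
from resnest.torch import resnest50  # example model; any nn.Module works

model = resnest50(pretrained=False)
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
# wrap with DistributedDataParallel afterwards when training on multiple GPUs
```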

That's fine. Ideally, a model using BN should not be sensitive to the initialization method.

Hi @FrancescoSaverioZuppichini, the BN+ReLU is applied to the first conv because it adds non-linearity between the two convs (otherwise they are equivalent to a single one). There is no BN+ReLU...
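A small illustrative sketch (not the actual ResNeSt code) of that point: without the ReLU, two stacked convolutions compose into a single linear map, so BN+ReLU sits between them.

```python
import torch.nn as nn

# Two 3x3 convs with BN+ReLU only after the first one. Removing the ReLU would
# make the pair equivalent to a single (linear) convolution.
block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),
)
```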

ResNeSt is designed around the bottleneck block; intuitively it may not be a good match for the basic block.

If you cannot download the model, please reinstall the package. The pretrained model can be acquired by:

```python
# using ResNeSt-50 as an example
from resnest.torch import resnest50
net = resnest50(pretrained=True)
```
...
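For completeness, a quick usage sketch of the loaded model (the 224×224 input size and 1000-class output assume the standard ImageNet-pretrained weights):

```python
import torch
from resnest.torch import resnest50

net = resnest50(pretrained=True)
net.eval()
with torch.no_grad():
    logits = net(torch.randn(1, 3, 224, 224))  # dummy input
print(logits.shape)  # expected: torch.Size([1, 1000])
```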

> @zhanghang1989 thanks for sharing the split attention module implementation, can we integrate this with FPN module instead of Resnet backbone ??

Yes, that should work.
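A rough sketch of what that could look like. The import path and constructor arguments of `SplAtConv2d` here are assumptions based on the resnest package layout, not a confirmed API; check your installed version before using it.

```python
# Hypothetical FPN output head that swaps the usual 3x3 conv for a
# Split-Attention conv. Names and arguments below are assumptions.
import torch
import torch.nn as nn
from resnest.torch.splat import SplAtConv2d  # path may differ across versions


class FPNSplAtHead(nn.Module):
    """Apply a Split-Attention conv to one FPN pyramid level."""

    def __init__(self, channels=256, radix=2):
        super().__init__()
        self.conv = SplAtConv2d(
            channels, channels, kernel_size=3, padding=1,
            groups=1, bias=False, radix=radix,
            norm_layer=nn.BatchNorm2d)  # argument names assumed

    def forward(self, x):
        return self.conv(x)


# e.g. one 256-channel FPN feature map
feat = torch.randn(2, 256, 64, 64)
print(FPNSplAtHead()(feat).shape)  # expected: torch.Size([2, 256, 64, 64])
```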

@Jerryzcn, you have done something similar, right?