zyxu1996
It would definitely work, since it adopts a structure similar to the SE block. The main point is the multiple loss functions. As for the SA block, I don't find it convincing.
We refer to the paper ResT: An Efficient Transformer for Visual Recognition. Efficient_T is composed of the ResT backbone and an MLP head; you can set up autorun.sh with `--models resT --head mlphead`.
You should use lr=0.0003, batch size=16, and img_size=512.
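Putting the two replies above together, an autorun.sh entry for Efficient_T might look like the sketch below. Only `--models resT --head mlphead` and the hyperparameter values come from this thread; the training script name and the `--lr`/`--batch_size`/`--img_size` flag names are assumptions and may differ in the actual repo.

```shell
#!/bin/bash
# Hypothetical autorun.sh entry for Efficient_T (ResT backbone + MLP head).
# "--models resT --head mlphead" is taken from the reply above; the script
# name (train.py) and the remaining flag names are placeholders.
python train.py \
    --models resT \
    --head mlphead \
    --lr 0.0003 \
    --batch_size 16 \
    --img_size 512
```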
> When I train the DANet model, the results are shown in the figure, and they do not reach the results in Table 10 of your paper. BatchSize...
> Hi, your test code is written for multi-GPU testing, but I'm training on a single GPU. How can I change the test code? Please help me....