ResidualAttentionNetwork-pytorch
about the code "out_interp = self.interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1"
Hello, thank you for your code!
But I have a question about it. This operation does not seem to appear in the paper: in the soft mask branch, only the skip connections use an addition operation. Could you help me resolve this?
out_interp = self.interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1
I referred to the Caffe version; you can consider it a trick.
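For context, here is a minimal sketch of what that line does inside the soft mask branch: the deeper, lower-resolution feature map is upsampled back to the resolution of the down path and added to it. The module and tensor names here are illustrative, not the repo's exact definitions, and `interpolation1` is assumed to behave like bilinear upsampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskSkipAdd(nn.Module):
    """Sketch of the upsample-plus-skip step discussed above, mirroring
    out_interp = interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1.
    """
    def forward(self, deep_feat, skip_feat):
        # Upsample the deeper features to the skip connection's spatial size.
        up = F.interpolate(deep_feat, size=skip_feat.shape[2:],
                           mode='bilinear', align_corners=True)
        # Element-wise addition fuses the coarse (upsampled) features with
        # the finer down-path features in the soft mask branch.
        return up + skip_feat

# Example: 7x7 deep features upsampled and added to 14x14 skip features.
deep = torch.randn(1, 64, 7, 7)
skip = torch.randn(1, 64, 14, 14)
out = SoftMaskSkipAdd()(deep, skip)   # shape: (1, 64, 14, 14)
```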
Hello, I have the same question now. Have you solved it yet? If so, could you share your experience with me?
I changed to another model, and it works.
Hi, thanks for your code. When I use 'ResidualAttentionModel_92_32input_update', I hit the problem described in this issue because of 'AttentionModule_stage1_cifar'. Changing the model to 'ResidualAttentionModel_92' works, but then I cannot load the given pretrained model because of many mismatched parameters. Do you have a good way to load the given model, or is there another pretrained model I can use? Thanks!
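One common workaround for the mismatch problem is to load only the checkpoint tensors whose names and shapes match the current model, and skip the rest. This is a generic PyTorch pattern, not something the repo documents; the import path and checkpoint filename below are assumptions you would need to adjust to your checkout.

```python
import torch
# Assumed import; the model definition lives elsewhere in your checkout.
from residual_attention_network import ResidualAttentionModel_92

model = ResidualAttentionModel_92()
ckpt = torch.load('pretrained.pth', map_location='cpu')  # placeholder path
state = ckpt.get('state_dict', ckpt)  # handle either save format

# Keep only parameters whose names AND shapes match the current model,
# so mismatched layers are skipped instead of raising a load error.
model_state = model.state_dict()
filtered = {k: v for k, v in state.items()
            if k in model_state and v.shape == model_state[k].shape}

skipped = set(model_state) - set(filtered)
print(f'loading {len(filtered)} tensors, leaving {len(skipped)} at init')

model.load_state_dict(filtered, strict=False)
```

Any layers left at their random initialization this way (for example, the attention-module stages that differ between the 32-input and 224-input variants) would need to be fine-tuned before the model is usable.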