residual-attention-network
Confused about the Interp layer
According to your paper, the "Interp" layer performs bilinear interpolation to upsample the previous layer's output. But the "Interp" layer has two inputs in your implementation. I'm not very familiar with Caffe. Could you provide some documentation for the "Interp" layer in Caffe? Are there any alternatives in TensorFlow or PyTorch?
The first input to the layer is the tensor to be resized; the second input dictates the target size. In TensorFlow, you can use tf.image.resize_images with bilinear interpolation.
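For reference, here is a minimal sketch of that two-input behavior in TensorFlow (assuming the TF 1.x API and NHWC tensors; `interp_like` is just an illustrative helper name, not part of this repo):

```python
import tensorflow as tf

def interp_like(x, ref):
    """Mimic Caffe's two-input "Interp" layer: bilinearly resize `x`
    to the spatial (H, W) size of the reference tensor `ref`."""
    target_size = tf.shape(ref)[1:3]  # dynamic (H, W) of the second input
    return tf.image.resize_images(
        x, target_size, method=tf.image.ResizeMethod.BILINEAR)
```

In PyTorch, the usual equivalent would be something like `torch.nn.functional.interpolate(x, size=ref.shape[2:], mode='bilinear')`.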
Got it! Thanks!