How do you create the attention maps?
I read your paper, and it says: "In order to generate the attention map of a conv. layer we first compute the feature maps of this layer, then we raise each feature activation on the power p, and finally we sum the activations at each location of the feature map. For the conv. layers 1, 2, and 3 we used the powers p = 1, p = 2, and p = 4 respectively."
What do you do after summing up the powered activations of these layers? I guess some backpropagation or deconvolution step is needed to generate such an attention map.
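For context, here is my current understanding of the quoted computation as a minimal PyTorch sketch. The absolute value (so odd powers stay sign-free) and the bilinear upsampling to the input resolution for visualization are my assumptions; the quoted passage does not mention either step:

```python
import torch
import torch.nn.functional as F

def attention_map(features, p, out_size=None):
    """Activation-based attention map from conv feature maps.

    features: tensor of shape (N, C, H, W) from a conv layer.
    p:        power applied to each activation (1, 2, or 4 per the quote).
    out_size: optional (H_in, W_in) to upsample to for visualization.
    """
    # Raise each activation to the power p, then sum over channels.
    # (abs() is my assumption, to keep odd powers non-negative.)
    amap = features.abs().pow(p).sum(dim=1)  # -> (N, H, W)
    if out_size is not None:
        # Upsample to the input resolution so the map can be overlaid
        # on the image -- a common visualization step, also an assumption.
        amap = F.interpolate(amap.unsqueeze(1), size=out_size,
                             mode='bilinear',
                             align_corners=False).squeeze(1)
    return amap

# Dummy usage with the powers quoted above (p = 1, 2, 4 per conv layer):
features = torch.randn(1, 64, 28, 28)  # pretend conv2 output
amap = attention_map(features, p=2, out_size=(224, 224))
```

Is a simple channel-wise sum plus upsampling like this all that is needed, or is some backward pass involved?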
Hi, have you found out how to create the attention maps? I'm interested in this technique.
I'm interested in this technique, too.