Self-supervised-Monocular-Trained-Depth-Estimation-using-Self-attention-and-Discrete-Disparity-Volum

How to visualize the attention map

Open Bowwowlol opened this issue 3 years ago • 0 comments

Hello, thanks for your outstanding research!! I have a question about visualizing the attention map. The output of the context module has 512 channels, and your paper indicates that the attention map shown was selected randomly from the output of the context module. So can I simply select the first channel of the context module's output and then upsample it to the original image size?
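For reference, here is a minimal sketch of what I have in mind, assuming the context module output is a `(B, 512, h, w)` feature map and an input resolution of 192x640 (both placeholder values, not taken from the repo): pick one channel, bilinearly upsample it, normalise it, and save it as a heat map.

```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

# Hypothetical context-module output: (B, 512, h, w); random tensor used as a stand-in.
features = torch.randn(1, 512, 24, 80)
channel_idx = 0  # e.g. the first channel (or a randomly chosen one)

# Keep the channel dimension so F.interpolate sees a 4D tensor: (1, 1, h, w).
att_map = features[:, channel_idx:channel_idx + 1]
att_map = F.interpolate(att_map, size=(192, 640),
                        mode="bilinear", align_corners=False)

# Normalise to [0, 1] for display and plot as a heat map.
att_map = att_map.squeeze().detach().cpu()
att_map = (att_map - att_map.min()) / (att_map.max() - att_map.min() + 1e-8)
plt.imshow(att_map, cmap="magma")
plt.axis("off")
plt.savefig("attention_channel_{}.png".format(channel_idx), bbox_inches="tight")
```

Is this the right way to reproduce the visualization from the paper, or did you post-process the selected channel differently?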

Bowwowlol — Oct 07 '22 08:10