Autofocus-Layer
Question about the autofocus implementation
In https://github.com/yaq007/Autofocus-Layer/blob/master/models.py, Line 180: why do you use x.detach()?
Here we do not want the attention map to influence the weight updates in the main branch. Therefore, we use x.detach() to treat the features extracted from the main branch as an independent input to the attention module.
If we use x.detach(), then how can I update the weights of the convs (such as convatt1 and convatt12) in the attention branches?
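For what it's worth, here is a minimal PyTorch sketch of the behavior in question (not the repository's actual code; `main_conv` and `att_conv` are hypothetical stand-ins for a main-branch conv and an attention conv such as convatt1). It illustrates that detach() only blocks gradients from flowing back into the branch that *produced* the tensor, while the convs applied to the detached tensor still receive gradients as usual:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the main branch and the attention branch
# (e.g. convatt1 in models.py); not the repository's actual code.
main_conv = nn.Conv3d(1, 4, kernel_size=3, padding=1)
att_conv = nn.Conv3d(4, 2, kernel_size=3, padding=1)

inp = torch.randn(1, 1, 8, 8, 8)
x = main_conv(inp)            # features from the main branch
att = att_conv(x.detach())    # detach: gradients cannot flow back into main_conv

att.sum().backward()

# The attention path does not touch the main branch's weights ...
print(main_conv.weight.grad)             # None
# ... but the attention conv itself still receives gradients.
print(att_conv.weight.grad is not None)  # True
```

In other words, the detached tensor is a leaf of the attention branch's autograd graph: backpropagation stops there, but everything downstream of it, including the attention convs, still gets gradients from the loss and is updated by the optimizer.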