self-attention-gan
Residual vs attentional blocks
All generator and discriminator variants implemented here are built from either `block()` or `block_no_sn()` modules, both of which contain an internal residual connection `x_0 + x` by default (see the sketch below for the pattern I mean). However, the associated paper compares residual vs. attentional blocks as if the two architectures were mutually exclusive, one or the other. So, does the attentional architecture reported in the paper also include residual blocks, or does this implementation not fully follow the reported architectures?
Thanks.