
Accuracy without Co-attention Fusion Module

Open parakh08 opened this issue 3 years ago • 0 comments

Thanks for this great work! In the paper you report accuracy for the variant with both SA and AS blocks but without the co-attention fusion module, and I wonder how you obtained the result in that case. Did you use a direct FC layer at the end of the attention modules? How can we replicate that result?
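For context, here is a minimal PyTorch sketch of the kind of ablation I have in mind: the final appearance and shape feature maps from the SA/AS branches are concatenated and projected by a single 1x1 convolution (effectively a channel-wise FC layer) instead of going through the co-attention fusion module. All module names, channel counts, and shapes below are my own assumptions for illustration, not the actual XingGAN implementation.

```python
import torch
import torch.nn as nn

class DirectFusion(nn.Module):
    """Hypothetical stand-in for the co-attention fusion module:
    concatenates the final SA/AS feature maps and projects them with
    a 1x1 convolution (a per-pixel FC over channels). Shapes are
    assumed, not taken from the XingGAN codebase."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, appearance_feat: torch.Tensor,
                shape_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([appearance_feat, shape_feat], dim=1)
        return self.proj(fused)

if __name__ == "__main__":
    # Assumed 64-channel feature maps from the SA and AS branches.
    sa_out = torch.randn(1, 64, 64, 32)
    as_out = torch.randn(1, 64, 64, 32)
    out = DirectFusion(channels=64)(sa_out, as_out)
    print(out.shape)  # torch.Size([1, 64, 64, 32])
```

Is this roughly what was done for the ablation, or was the fusion handled some other way?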

parakh08 · Feb 07, 2022