Unsupervised-Attention-guided-Image-to-Image-Translation

How can I make the attention focus on the background?

deep0learning opened this issue 5 years ago · 1 comment

Hi, thank you for this work. I would like the attention to be applied to the background rather than the object. For example, domains A and B both contain horse images; when translating from domain A to domain B, I want to keep the same horse but have the background translated to domain B's style. How can I do that? Thank you in advance.
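One possible way to get this behavior, sketched under the assumption of a CycleGAN-style setup where the attention map `a` marks the foreground object: invert the roles of the mask in the compositing step, so the generator's output is used where attention is low (background) and the input is kept where it is high (object). The function name `composite_background` is illustrative, not from the repo.

```python
import torch

def composite_background(x, generated, a):
    """Translate only the background of x.

    x         : (N, 3, H, W) source image
    generated : (N, 3, H, W) generator output
    a         : (N, 1, H, W) attention map in [0, 1], high on the object
    """
    # Paper-style composite (foreground translated): a * generated + (1 - a) * x
    # Inverted composite (background translated):
    return (1 - a) * generated + a * x

x = torch.rand(1, 3, 64, 64)   # source image
g = torch.rand(1, 3, 64, 64)   # hypothetical generator output
a = torch.rand(1, 1, 64, 64)   # hypothetical attention map
y = composite_background(x, g, a)
```

Where the mask is exactly 1 (pure object) the output equals the input, so the horse is preserved pixel-for-pixel.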

deep0learning · Jul 19 '19 17:07

I have the same question. In other words, how can the Attention Network output a mask (attention map) that keeps its eye on the foreground object in an unsupervised setup? According to the paper, the architectures of the Generators and the Attention Networks are almost the same except for the final activation function: when the final activation is a sigmoid with a single output channel, the output of the network is the attention map. I don't understand how that works.
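A minimal PyTorch sketch of that architectural point, using tiny stand-in convolutional backbones (the real networks are ResNet-style generators): the two heads differ only in output channels and final activation, and the 1-channel sigmoid output acts as a soft mask in the paper's compositing equation.

```python
import torch
import torch.nn as nn

class Head(nn.Module):
    """Toy backbone; only the last layer and activation differ between
    the generator and the attention network."""
    def __init__(self, out_channels, final_act):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, 3, padding=1),
            final_act,
        )

    def forward(self, x):
        return self.net(x)

generator = Head(3, nn.Tanh())      # translated image, values in [-1, 1]
attention = Head(1, nn.Sigmoid())   # attention map, values in [0, 1]

x = torch.randn(2, 3, 64, 64)       # batch of source-domain images
a = attention(x)                    # (2, 1, 64, 64) soft mask
g = generator(x)                    # (2, 3, 64, 64) translated image

# Composite as in the paper: translated foreground, background copied
# unchanged from the input (the 1-channel mask broadcasts over RGB).
y = a * g + (1 - a) * x
```

So nothing special makes the sigmoid output an "attention map" by construction; it is just a per-pixel gate, and training gives it that meaning.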

Moreover, Figure 7 in the paper shows that the Attention Network can focus on the foreground object early in training. It is amazing! During early training the only losses are the adversarial loss and the cycle-consistency loss; there is no label information guiding the Attention Network to focus on the foreground object.
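To make that training signal concrete, here is a hedged sketch of the two losses, assuming placeholder callables `F_ab`, `F_ba`, `A_a`, `D_b` for the A-to-B generator, B-to-A generator, domain-A attention network, and domain-B discriminator (the names are mine, not the authors'). The attention map enters both losses only through the composited output, so any mask that makes the composite realistic and cycle-consistent is rewarded; one intuition for the emergent foreground focus is that translating the object is what most changes the domain.

```python
import torch
import torch.nn.functional as F

def cycle_and_adv_losses(x_a, F_ab, F_ba, A_a, D_b):
    """Generator-side losses for one translation direction (sketch)."""
    a = A_a(x_a)                              # soft foreground mask in [0, 1]
    fake_b = a * F_ab(x_a) + (1 - a) * x_a    # attention-composited translation
    rec_a = F_ba(fake_b)                      # map back to domain A
    loss_cyc = F.l1_loss(rec_a, x_a)          # cycle-consistency (L1)
    # least-squares adversarial loss for the generator side
    loss_adv = ((D_b(fake_b) - 1) ** 2).mean()
    return loss_cyc, loss_adv
```

Gradients from both terms flow into `A_a` through the composite, which is the only supervision the attention network ever receives.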

I am looking forward to discussing this with you and the author. @deep0learning @AlamiMejjati

jian3xiao · Mar 15 '20 04:03