Unsupervised-Attention-guided-Image-to-Image-Translation
Interpreting domain descriptive objects (importance of background variety)
Hi,
First of all, thanks for your work. It has been very interesting to read up on. I'm currently doing my Master's thesis and am referencing your work.
I have a question regarding your paper:
Do the image domains need to have varying backgrounds in order for the attention networks to correctly attend to the foreground?
For example, the horse2zebra dataset has high background variance in both domains [1, 2]. Is that a general requirement?
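For concreteness, my understanding of the compositing step in question is roughly the following (a minimal NumPy sketch, not your implementation; array names and shapes are illustrative). The translated foreground is blended in where the attention map is high, and the original background is passed through elsewhere, which is why I wonder whether the attention network needs background variance to learn that separation:

```python
import numpy as np

def composite(source, generated, attn_mask):
    """Blend the translated foreground with the original background.

    source, generated: H x W x C images in [0, 1]
    attn_mask: H x W x 1 attention map in [0, 1]
    """
    # High attention -> use translated pixels; low attention -> keep input.
    return attn_mask * generated + (1.0 - attn_mask) * source

# Toy example: the mask marks the left half of the image as "foreground".
H, W = 4, 4
source = np.zeros((H, W, 3))      # black "background" input
generated = np.ones((H, W, 3))    # white "translated" output
mask = np.zeros((H, W, 1))
mask[:, : W // 2] = 1.0

out = composite(source, generated, mask)
# Left half comes from `generated`, right half from `source`.
```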
Would greatly appreciate any pointers.
Best regards, Johan
--
[1]: Zebra domain backgrounds: enclosures, meadows, savanna
[2]: Horse domain backgrounds: meadows, beaches, open landscapes