motion-cosegmentation
Mask size with first order model
Hi, I tried training my own first order model network with a slight modification of the first order model code: basically just replacing one perceptual loss with another.
When I plug that new model into the notebook provided in this repo, in the final part where supervised segmentation is used together with the first order model, I get the following:
However, when I use your provided pretrained first order model, I get the following, where the face is stretched across the entire target face.
Obviously the source hair is the issue: the source image only has part of the face visible, and that part is missing from the mask as well. Yet this doesn't seem to bother your model, which properly covers the entire target face. Do you have an intuition for why this might be happening?
Hi, in the First Order Model repo it is said that it is possible to do some face swapping by modifying the method. How did you do that? What did you change? Thank you!
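For context on both questions above, the core of mask-based part swapping (regardless of the exact changes the authors made) is usually a per-pixel alpha blend: pixels where the part mask is 1 come from the (warped) source, pixels where it is 0 keep the target. That also gives an intuition for the hair problem: if the predicted mask misses a region, that region is simply left as the target. Below is a minimal NumPy sketch of this blend; the function name and shapes are my own illustration, not the repo's actual API, and it assumes the source and mask have already been warped into the target frame.

```python
import numpy as np

def part_swap_blend(source, target, mask):
    """Blend a swapped part from source into target.

    source, target: float arrays, shape (H, W, 3), values in [0, 1],
                    with source already warped into the target frame.
    mask:           float array, shape (H, W, 1), values in [0, 1],
                    the soft segmentation mask of the part to swap.

    Where mask == 1 the output shows the source part; where mask == 0
    the target pixel is kept unchanged, so any region the mask misses
    (e.g. hair) simply stays as it was in the target.
    """
    return mask * source + (1.0 - mask) * target
```

So a mask that covers the entire face region produces a full face swap, while a partial mask produces the kind of incomplete result described above.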