
why optimize dlatent rather than qlatent?

Open jcpeterson opened this issue 5 years ago • 5 comments

The dimensionality of dlatent is 9x larger, sampling isn't as simple, interpolations are worse, and mapping to attribute directions (a smile direction, for example) needs more data.

jcpeterson avatar Mar 18 '19 14:03 jcpeterson

+1. Tried interpolating in dlatent space and the results didn't seem natural at all.

Definiter avatar Mar 19 '19 23:03 Definiter

Linking to a reddit comment by the author:

So the StyleGAN generator actually contains 2 components:

Generator:

qlatent = normally distributed noise with shape (512)

dlatent = mapping_network(qlatent), with shape (18, 512)

where mapping_network is a fully connected network that transforms qlatent to dlatent

generator(mapping_network(qlatent)) = image
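The two-stage structure above can be sketched with toy shapes. The weights, layer count, and activation below are stand-ins for illustration, not the real checkpoint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for StyleGAN's 8-layer fully connected mapping network.
# Weight shapes and scales are illustrative only.
W = [rng.standard_normal((512, 512)) * 0.01 for _ in range(8)]

def mapping_network(qlatent):
    """Map z (qlatent) -> w, then broadcast w to one copy per synthesis layer."""
    w = qlatent
    for weight in W:
        w = np.maximum(w @ weight, 0.0)  # the real network uses leaky ReLU
    # StyleGAN at 1024px has 18 synthesis-layer inputs, so w is tiled to (18, 512).
    return np.tile(w, (18, 1))

qlatent = rng.standard_normal(512)   # z ~ N(0, I), shape (512,)
dlatent = mapping_network(qlatent)   # shape (18, 512)
print(qlatent.shape, dlatent.shape)  # (512,) (18, 512)
```

Without truncation or style mixing, all 18 rows of dlatent are copies of the same w; independent per-row optimization is what gives the encoder its extra freedom.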

So during encoding (latent optimization) we optimize dlatent instead of qlatent. Optimizing qlatent leads to bad results (I can elaborate on that). dlatent is used for feature-wise transformation of the generator's convolution layers: https://distill.pub/2018/feature-wise-transformations/
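The encoding loop described here (freeze the generator, gradient-descend on the latent until the output matches a target) can be sketched with a toy linear generator. `G`, the target, the shapes, and the learning rate are all made up; the real synthesis network is nonlinear and the loss is usually a VGG perceptual loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen toy "generator": a fixed linear map from a flattened (18*512,)
# dlatent to a tiny "image" vector. Linear so the gradient is analytic.
G = rng.standard_normal((64, 18 * 512)) / np.sqrt(18 * 512)
target = rng.standard_normal(64)   # stands in for features of a real photo

dlatent = np.zeros(18 * 512)       # start from a neutral latent
lr = 5.0
for step in range(200):
    residual = G @ dlatent - target         # reconstruction error
    grad = 2.0 * G.T @ residual / len(target)  # d(MSE)/d(dlatent), G frozen
    dlatent -= lr * grad

loss = np.mean((G @ dlatent - target) ** 2)
```

Only the latent is updated; the generator weights never change, which is what distinguishes encoding from training.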

2) dlatent + multiplier * logreg_coeff; yes, but I use the raw coefficients from the logistic regression, so it doesn't matter whether they are positive or negative.
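That edit can be sketched end to end: fit a logistic regression on labeled latents, then move a latent along the raw coefficient vector. Everything below is a stand-in (synthetic labels, a single 512-dim latent instead of the real (18, 512) one flattened, and a hand-rolled regression instead of sklearn's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: 1000 latents with binary "smile" labels that are,
# by construction, linearly separable along a hidden direction.
true_dir = rng.standard_normal(512)
X = rng.standard_normal((1000, 512))
y = (X @ true_dir > 0).astype(float)

# Plain logistic regression by gradient descent; the coefficient vector
# is then used raw, sign and scale included.
coef = np.zeros(512)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ coef)))   # predicted probabilities
    coef -= 0.1 * X.T @ (p - y) / len(y)    # logistic-loss gradient step

# The edit itself: move a latent along the coefficient direction.
dlatent = rng.standard_normal(512)
multiplier = 2.0
edited = dlatent + multiplier * coef
```

A negative multiplier simply walks the same direction the other way, which is why the sign of the coefficients doesn't matter.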

3) Yes. It somewhat works and we can generate relatively similar faces, but fewer details are preserved. It is still in progress.

vu0tran2 avatar Mar 24 '19 06:03 vu0tran2

@vu0tran2 Yes, I've seen it, but the "elaboration" was never given. In principle I don't see why it should be worse.

jcpeterson avatar Mar 25 '19 00:03 jcpeterson

I've done some experiments with optimizing dlatent vs qlatent. I've observed that when optimizing qlatent against a real image (I tried a few images of celebrities), the result does not converge to the desired target image. However, when optimizing qlatent against an image generated by sampling from qlatent space, the reconstruction converges quickly.

My intuition is that the space of qlatent does not represent all human faces. Since qlatent has lower dimensionality than dlatent, it is intuitive to me (pigeonhole principle) that it can represent fewer images.
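This pigeonhole intuition can be demonstrated with a toy linear generator: fitting an arbitrary target by choosing a low-dimensional z can never beat fitting it by choosing the higher-dimensional h directly, because the z-reachable images are a subset of the h-reachable ones. All matrices and sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy pipeline: qlatent z in R^2 is mapped (like the mapping network)
# to dlatent h in R^8, then rendered to a 10-pixel "image". Both maps
# are linear so each best fit is an exact least-squares solve.
M = rng.standard_normal((8, 2))    # mapping-network stand-in: z -> h
A = rng.standard_normal((10, 8))   # synthesis stand-in: h -> image

target = rng.standard_normal(10)   # an arbitrary "real photo"

# Best reconstruction when we may only choose z (optimize qlatent):
z, *_ = np.linalg.lstsq(A @ M, target, rcond=None)
err_q = np.linalg.norm(A @ M @ z - target)

# Best reconstruction when we may choose h directly (optimize dlatent):
h, *_ = np.linalg.lstsq(A, target, rcond=None)
err_d = np.linalg.norm(A @ h - target)
```

Since the columns of `A @ M` span a subspace of the columns of `A`, `err_d <= err_q` always, and for a generic target the inequality is strict — matching the observation that real photos converge poorly under qlatent optimization while qlatent-generated images converge fine.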

ndahlquist avatar Sep 09 '19 00:09 ndahlquist

> I've done some experiments with optimizing dlatent vs qlatent. I've observed that when optimizing qlatent against a real image (I tried a few images of celebrities), the result does not converge to the desired target image. However, when optimizing qlatent against an image generated by sampling from qlatent space, the reconstruction converges quickly.
>
> My intuition is that the space of qlatent does not represent all human faces. Since qlatent has lower dimensionality than dlatent, it is intuitive to me (pigeonhole principle) that it can represent fewer images.

I tried the same encoding process and ran into the same problem. Did you align the celebrity images? The generated faces have standardized landmarks, meaning the eyes and mouth of every face are in exactly the same place across all pictures.
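For reference, alignment boils down to a similarity transform that sends detected landmarks onto canonical positions. A minimal sketch using only the two eye points (the canonical coordinates below are made up, and the repo's align script uses dlib's full landmark set rather than this two-point version):

```python
import numpy as np

# Hypothetical canonical eye positions in a 256x256 crop (made-up numbers).
canon_left_eye = np.array([88.0, 108.0])
canon_right_eye = np.array([168.0, 108.0])

def align_matrix(left_eye, right_eye):
    """Similarity transform (rotation + uniform scale + translation) that
    sends the detected eye points onto the canonical eye points."""
    src = np.asarray(right_eye) - np.asarray(left_eye)
    dst = canon_right_eye - canon_left_eye
    # Encode the 2-D rotation+scale as one complex multiplication: dst = s * src.
    s = complex(*dst) / complex(*src)
    R = np.array([[s.real, -s.imag],
                  [s.imag,  s.real]])
    t = canon_left_eye - R @ np.asarray(left_eye)
    return R, t

# A tilted face: eyes not level and spaced differently than the canon.
R, t = align_matrix(left_eye=[100.0, 120.0], right_eye=[160.0, 100.0])
aligned_right = R @ np.array([160.0, 100.0]) + t  # lands on canon_right_eye
```

Warping every input with such a transform before encoding removes the pose mismatch between real photos and the generator's standardized faces.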

I applied lots of augmentation to the generated images; the encoded result for real images got better, but it is still far from the same face.

danielkaifeng avatar Mar 04 '20 08:03 danielkaifeng