zhujiapeng
Hi, where can I find the pre-trained caffemodel for Inception-v4? I found that the input size of Inception-v4 is 299*299, but my image set data is 256*256. I don't have the...
@soeaver Thanks!
See https://github.com/genforce/idinvert/issues/11#issuecomment-671303849
Once a StyleGAN is well trained, you will find that different layers of `w` control different image semantics; you can refer to the style-mixing part of the original StyleGAN paper. In our paper, we first search...
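A minimal sketch of the style-mixing idea mentioned above. The shapes and layer split here are illustrative assumptions (a `W+` code with one 512-d vector per generator layer, coarse layers first), not the exact code from this repo:

```python
import numpy as np

# Assumed shapes: a StyleGAN W+ code holds one 512-d vector per layer,
# e.g. 14 layers for a 256x256 generator. Values are random placeholders.
num_layers, w_dim = 14, 512
rng = np.random.default_rng(0)
w_source = rng.standard_normal((num_layers, w_dim))
w_target = rng.standard_normal((num_layers, w_dim))

# Style mixing: copy the source code into the first (coarse) layers of the
# target code. Per the StyleGAN paper, coarse layers tend to control pose
# and shape, while fine layers control color and texture.
mixed = w_target.copy()
mixed[:4] = w_source[:4]  # coarse styles from source, fine styles from target
```

Feeding `mixed` through the generator shows which semantics each layer range controls.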
Yes.

> would I have to write a version of diffuse in utils/inverter.py that accepts a mask instead of x/y coordinates of the rectangular foreground bounding box?
You can refer to lines 80 and 81 in `interpolate.py`.
Sorry, I have been a little busy these days and do not have time to do this. But you could refer to the `interpolate.py` script, where lines 107 and 108 get the numpy...
We use 100 iterations in our paper.
The attribute vectors are related to the models but not the inversion methods.
Take the first 65000 images as the training set and take the rest as the test set.