For FFHQ, we used the first 65k images as the training dataset and the last 5k images as the test dataset. For LSUN (tower and bedroom), we randomly sampled 100k...
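
For reference, a minimal sketch of that FFHQ split (the directory path is an assumption; the official release ships 70k sequentially numbered images):

```python
import os

# Hypothetical location of the 70k FFHQ images (e.g., 00000.png ... 69999.png).
ffhq_dir = "datasets/ffhq"

all_images = sorted(os.listdir(ffhq_dir))
train_images = all_images[:65000]  # first 65k for training
test_images = all_images[65000:]   # last 5k for testing
```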

Yes, at line 67 of `mix_style.py`, just pass the specific layers to `mix_layers`.
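
As a hypothetical sketch of what that controls (the exact argument format in `mix_style.py` may differ; check line 67 for the real usage), style mixing swaps the per-layer style codes at the chosen layers:

```python
def mix_styles(content_code, style_code, mix_layers):
    """Replace the per-layer style vectors of `content_code` with those of
    `style_code` at the layers listed in `mix_layers`."""
    mixed = list(content_code)
    for layer in mix_layers:
        mixed[layer] = style_code[layer]
    return mixed

# E.g., mix only the middle layers of a 14-layer W+ code (256x256 StyleGAN).
mixed_code = mix_styles(content_code=[f"c{i}" for i in range(14)],
                        style_code=[f"s{i}" for i in range(14)],
                        mix_layers=[6, 7, 8])
```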

The StyleGAN generator in genforce is refactored. The whole training pipeline can be found [here](https://github.com/genforce/idinvert); this PyTorch repo only supports inference.

No; training the encoder is supported in this [repo](https://github.com/genforce/ghfeat).

We did not run such experiments, but feel free to try.

Yes, you can train on your own dataset using this [repo](https://github.com/genforce/idinvert). Besides, in this repo we also provide some models trained on LSUN tower and bedroom.

Yes, you are right.

We just used the last 7,000 images in the FFHQ dataset.

Those are several images collected from the Web.

The second dataset you passed, `~/datasets/custom/custom-r07.tfrecords`, should also have a resolution of 256x256; it is used as the test/validation dataset during training.
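
If you want to sanity-check the images before packing them into that tfrecords file, a minimal Pillow sketch (the source folder name is an assumption):

```python
import glob
from PIL import Image

# Hypothetical folder holding the raw validation images.
for path in glob.glob("custom_val/*.png"):
    img = Image.open(path)
    if img.size != (256, 256):
        img.resize((256, 256), Image.LANCZOS).save(path)
```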