Xingang Pan
@h1-ti Thanks for your interest in our work. To train on your own images, you need to perform the so-called "GAN inversion" process to obtain the latent codes. There...
@h1-ti I haven't tried idinvert_pytorch before, but https://github.com/rosinality/stylegan2-pytorch (projector.py) works for me.
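In case it helps, here is a minimal sketch of what such an inversion loop does: optimize a latent code so the generator reproduces the target image. The names here (`generator`, `invert`) are illustrative, not the actual projector.py API, and projector.py itself also uses an LPIPS perceptual loss and noise regularization rather than plain MSE:
```python
import torch
import torch.nn.functional as F

def invert(generator, target, steps=1000, lr=0.01):
    # Start from a random latent code; 512 is the usual StyleGAN2 latent size.
    latent = torch.randn(1, 512, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        recon = generator(latent)  # hypothetical generator interface
        # projector.py optimizes an LPIPS perceptual loss; plain MSE is the
        # simplest stand-in for illustration.
        loss = F.mse_loss(recon, target)
        loss.backward()
        optimizer.step()
    return latent.detach()
```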
@FrancisYu2020 Yes, you are right.
@FrancisYu2020 Thanks for your interest. Hope you are having fun with it : )
@NOlivier-Inria Hi, I haven't tried tuning the parameters for the FFHQ dataset. But I think one reason it's hard to make it work on FFHQ is that the images of FFHQ are...
@leelang7 Do you have 4 GPUs on your server? The script requires 4 GPUs by default. If not, you may change the script to:
```
python run.py \
    --config...
```
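If you are not sure how many GPUs PyTorch can see on your machine, a quick check:
```python
import torch

# Prints the number of CUDA devices visible to PyTorch.
print(torch.cuda.device_count())
```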
@leelang7 For the first error, you should specify what ${CONFIG} and ${EXP} are. For example:
```
python run.py \
    --config configs/car.yml \
    2>&1 | tee results/car/log.txt
```
For the second...
@raagapranitha Hi, thanks for your interest in our work. Note that our method is an online method, which means you need to do unsupervised training for each test image. For...
@anyaviswa 1. The `self.depth` in `forward_step1` is considered for evaluation (i.e., line 470 of `model.py`). 2. Yes, you are right.
@FrancisYu2020 The authors of StyleGAN2 have implemented a fused bias+activation operation (those under `gan2shape/stylegan2/stylegan2-pytorch/op`), which is used in StyleGAN2 and is faster than the native PyTorch implementation. So compiling this...
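For reference, here is a rough sketch of what the fused op computes, written in plain PyTorch (this mirrors the pure-PyTorch fallback path: a per-channel bias add, LeakyReLU, and a rescale); the function name and the sqrt(2) scale constant follow the usual StyleGAN2 convention and are assumptions, not the repo's exact code:
```python
import torch
import torch.nn.functional as F

def fused_leaky_relu_fallback(x, bias, negative_slope=0.2, scale=2 ** 0.5):
    # Reshape the per-channel bias so it broadcasts over (N, C, H, W) inputs,
    # then apply LeakyReLU and the sqrt(2) rescale used by StyleGAN2.
    bias = bias.view(1, -1, *([1] * (x.dim() - 2)))
    return F.leaky_relu(x + bias, negative_slope=negative_slope) * scale
```
The compiled CUDA version fuses these steps into one kernel, which is why it is faster than running them as separate native ops.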