zhujiapeng
See this [repo](https://github.com/genforce/genforce).
Please refer to the `README.md` of this [repo](https://github.com/genforce/idinvert)
This is the first version of our encoder structure; the weights given here exactly match this encoder structure. We updated the TensorFlow version slightly from the first...
You can try images with a resolution of 256x256 and see if the problem still occurs.
This may be caused by your environment. I found some possible solutions, such as [here](https://blog.csdn.net/Leo_Xu06/article/details/82023330) and [here](https://stackoverflow.com/questions/43990046/tensorflow-blas-gemm-launch-failed); see if these help.
Yes, the provided boundaries are obtained using InterFaceGAN. For other classes, you can also use InterFaceGAN or other methods, such as the one in [this repo](https://github.com/zhujiapeng/resefa).
Yes, InterFaceGAN can be used to find editing directions. However, you need classifiers trained on the scene dataset (*e.g.,* Places) to obtain your expected direction.
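Once you have a boundary from such a classifier, applying it is just moving the latent code along the boundary's normal direction. A minimal sketch (the function name and shapes here are illustrative, not the actual InterFaceGAN API):

```python
import numpy as np

def edit_latent(latent_code, boundary, alpha):
    """Move a latent code along a semantic boundary's normal direction.

    latent_code: (1, dim) latent vector (e.g. a W-space code).
    boundary:    (1, dim) normal of the separating hyperplane found by
                 the attribute classifier; normalized to unit length here.
    alpha:       step size; sign controls the editing direction.
    """
    boundary = boundary / np.linalg.norm(boundary)
    return latent_code + alpha * boundary

# Toy example in a 4-D latent space.
code = np.zeros((1, 4))
normal = np.array([[1.0, 0.0, 0.0, 0.0]])
edited = edit_latent(code, normal, alpha=2.0)  # moves 2 units along the normal
```

In practice you sweep `alpha` over a small range and inspect the synthesized images to find a visually reasonable editing strength.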
For your medical dataset, is the generator trained by yourself?
Conditional StyleGAN? Do you mean you are using one-hot labels?
1. Is the initial reconstruction still a realistic image? After obtaining the encoded latent codes, did you feed them to the synthesis network directly, or did you feed them to the mapping...
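For reference, the distinction behind that question can be sketched as follows. These are placeholder numpy stand-ins for the two StyleGAN sub-networks, not the real models: sampling goes `z -> mapping -> synthesis`, while an encoder such as ours outputs codes that are already in W space and should skip the mapping network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "mapping network": one linear layer with a nonlinearity.
W_map = rng.normal(size=(512, 512))

def mapping(z):
    """Map a Z-space code to a W-space code (toy stand-in)."""
    return np.tanh(z @ W_map)

def synthesis(w):
    """Render an 'image' from a W-space code (toy stand-in)."""
    return float(w.sum())

# Sampling path: z -> mapping -> synthesis.
z = rng.normal(size=(1, 512))
img_sampled = synthesis(mapping(z))

# Inversion path: the encoder's output is already a W-space code,
# so it goes to the synthesis network directly.
w_encoded = rng.normal(size=(1, 512))
img_recon = synthesis(w_encoded)
```

Feeding an encoded W-space code back through the mapping network is a common mistake and typically ruins the reconstruction.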