Shuai Yang
I haven't tried the current version on multiple styles. But since the model is trained on image patches, maybe you can crop both the fire image and the water image into...
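The cropping idea above can be sketched as follows. This is a minimal NumPy sketch, not TET-GAN code: it splits a style image into non-overlapping square patches and stitches them back, assuming the image dimensions are multiples of the patch size (a real pipeline would pad or use overlapping crops).

```python
import numpy as np

def crop_patches(img, patch=256):
    """Split an HxWxC image into non-overlapping patch x patch tiles.
    Border pixels that don't fill a full tile are dropped for simplicity."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(((y, x), img[y:y + patch, x:x + patch]))
    return tiles

def stitch_patches(tiles, shape, patch=256):
    """Reassemble tiles (as returned by crop_patches) into a full image."""
    out = np.zeros(shape, dtype=tiles[0][1].dtype)
    for (y, x), t in tiles:
        out[y:y + patch, x:x + patch] = t
    return out
```

Each cropped tile could then be treated as an independent training patch for its style (fire or water) before recombining the results.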
Since our model is a fully convolutional network trained on 256*256 images, it can handle images of arbitrary size. Therefore, in this example, we just feed the whole image into...
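The reason a fully convolutional network accepts arbitrary input sizes is that convolution itself has no fixed input dimension: the same kernel slides over any spatial extent, and only the output size changes. A minimal NumPy sketch of a "valid" 2D convolution (not the model's actual layers) makes this concrete:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution: output size is (H-k+1, W-k+1).
    The same kernel works on inputs of any spatial size, which is
    the property that lets a fully convolutional network run on
    images larger than the 256x256 crops it was trained on."""
    k = kernel.shape[0]
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + k, x:x + k] * kernel)
    return out
```

Feeding a 10x10 or a 16x12 image through the same 3x3 kernel works unchanged; only the output resolution differs.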
I used one GPU with 8GB memory. I don't remember the exact training time; I think it took several days but less than a week. If you use GPUs with...
Here is an example of my fine-tuning loss:

```
--- load options ---
batchsize: 8
datasize: 80
epoch: 1
gpu: 1
load_model_name: ../save/tetgan-aaai.ckpt
outer_iter: 20
save_model_name: ../save/tetgan-oneshot.ckpt
style_name: ../data/oneshotstyle/3-train.png
supervise: 1
...
```
I don't know; I haven't encountered this problem in TET-GAN. https://github.com/williamyang1991/TET-GAN/blob/bdfca141fc14c5917fd9be8d2bc23870f9ad3288/src/models.py#L402-L405 Maybe you should check which one gives NaN: `x_feature`, `x_fake`, or `fake_output`.
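A generic way to do that check is a small helper that scans each named tensor for NaN and reports the first offender. This is a debugging sketch (not code from the repo), shown with NumPy arrays standing in for the tensors; with PyTorch you would use `torch.isnan(t).any()` in the same loop.

```python
import numpy as np

def first_nan(**tensors):
    """Return the name of the first array containing NaN, or None.
    Call as: first_nan(x_feature=..., x_fake=..., fake_output=...)"""
    for name, t in tensors.items():
        if np.isnan(np.asarray(t, dtype=float)).any():
            return name
    return None
```

Inserting such a check right before the line that produces NaN losses narrows down which intermediate tensor first goes bad.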
You should just prepare a font dataset and train another TET-GAN on it. Let the TET-GAN trained on fonts be G1, and the one trained on text effects be G2. To transfer...
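The two-stage pipeline described above can be sketched as a simple composition. G1 and G2 here are hypothetical stand-ins (plain callables) for the two independently trained TET-GANs; only the ordering, glyph -> G1 (font transfer) -> G2 (text effects), is the point.

```python
def two_stage_transfer(glyph, g1, g2):
    """Apply font transfer first (G1), then render text effects (G2).
    g1 and g2 are placeholders for the two independently trained models."""
    restyled_glyph = g1(glyph)       # stage 1: map glyph to target font
    return g2(restyled_glyph)        # stage 2: apply text effects
```

Because the two models are trained independently (see the next answer), no weights are shared; the output of G1 is simply fed as the input image to G2.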
There is no difference between G1 and G2. They are trained independently.
This part of the code is from stylegan2-pytorch; maybe you can find the solution on its issues page: https://github.com/rosinality/stylegan2-pytorch/issues/1
You can also follow this: https://github.com/williamyang1991/DualStyleGAN/tree/main/model/stylegan/op_cpu Change the code to a version that does not require cpp_extension.
Since caricatures have large distortions and intra-domain diversity, the results with style controls on caricature style are not as good as those on other styles (e.g., suffering from ghosting artifacts...