Shuai Yang


![image](https://user-images.githubusercontent.com/18130694/195776157-401a4c7a-1aa6-4043-8063-e1a1893f58ed.png)

My checkpoint is trained with color transfer, so you do not need to specify `--fix_color` during training or testing.

I see. You can specify `--fix_style` and `--style_id` to learn one anime style, or change https://github.com/williamyang1991/VToonify/blob/db57c27b4189023a5330c21b015a8e78cc111b87/train_vtoonify_d.py#L245-L250 (removing the `and args.fix_style`) to:

```
if not args.fix_color:
    xl = style.clone()
else:
    ...
```

> You can specify the `--fix_style` and `--style_id` to learn one anime style,

In this case, you need to specify `--fix_color`. So the style options are `--fix_color --fix_degree --style_degree 0.5...`
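For concreteness, a fixed-style run might be launched as below. This is a hypothetical invocation: only the flags quoted above come from the discussion, `26` is a placeholder style index, and the remaining required arguments (dataset and checkpoint paths, batch size, iterations, etc.) are omitted.

```
# Hypothetical example: train on a single anime style with fixed color and
# a fixed style degree of 0.5. Flags are the ones discussed above; all other
# required arguments are omitted and depend on your setup.
python train_vtoonify_d.py \
    --fix_color --fix_degree --style_degree 0.5 \
    --fix_style --style_id 26
```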

I think your results look good!

If you want a scenery style transfer model, I think you need to train a StyleGAN on scenery photos and fine-tune it on the cartoon scenery images to get a...
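As a rough sketch of that two-step recipe, fine-tuning could look like the following, assuming a rosinality-style stylegan2-pytorch codebase; the `model` import, constructor signatures, and checkpoint keys here are assumptions, not the actual repo layout.

```
import torch
from torch import optim
from torch.nn.functional import softplus
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from model import Generator, Discriminator  # assumed stylegan2-pytorch modules

device = "cuda"
g = Generator(256, 512, 8).to(device)  # assumed args: size, style_dim, n_mlp
d = Discriminator(256).to(device)

# Step 1 result: a StyleGAN pretrained on scenery *photos* (keys assumed).
ckpt = torch.load("stylegan_scenery_photos.pt", map_location=device)
g.load_state_dict(ckpt["g"])
d.load_state_dict(ckpt["d"])

# Step 2: continue adversarial training on cartoon scenery images,
# typically with a small learning rate to stay close to the photo prior.
tf = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(256),
                         transforms.ToTensor(),
                         transforms.Normalize([0.5] * 3, [0.5] * 3)])
cartoon_loader = DataLoader(datasets.ImageFolder("data/cartoon_scenery", tf),
                            batch_size=8, shuffle=True)

g_optim = optim.Adam(g.parameters(), lr=2e-4, betas=(0, 0.99))
d_optim = optim.Adam(d.parameters(), lr=2e-4, betas=(0, 0.99))

for real, _ in cartoon_loader:
    real = real.to(device)
    z = torch.randn(real.size(0), 512, device=device)
    fake, _ = g([z])

    # non-saturating logistic GAN losses
    d_loss = (softplus(d(fake.detach())) + softplus(-d(real))).mean()
    d_optim.zero_grad(); d_loss.backward(); d_optim.step()

    g_loss = softplus(-d(fake)).mean()
    g_optim.zero_grad(); g_loss.backward(); g_optim.step()
```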

![balloon2](https://user-images.githubusercontent.com/18130694/190160779-fdcad956-8654-4fad-9f44-999888b59948.png)

In my training logs, the loss does not converge either, which is normal for WGAN-GP: https://github.com/VITA-Group/ShapeMatchingGAN/blob/master/src/ShapeMatchingGAN.ipynb

![image](https://user-images.githubusercontent.com/18130694/190161016-edf88870-af2c-4ead-bdd1-c80c1b974079.png)
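The intuition: the WGAN-GP critic loss is a difference of means plus a gradient penalty, so it hovers around some value rather than decreasing monotonically; image quality, not the loss curve, is what to monitor. A minimal generic sketch of the penalty term in PyTorch (not the ShapeMatchingGAN code itself):

```
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: push the critic's gradient norm toward 1
    on random interpolations between real and fake samples."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)[0]
    grads = grads.reshape(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Critic loss = E[critic(fake)] - E[critic(real)] + lambda * penalty.
# The first two terms estimate a Wasserstein distance, so the curve can
# oscillate around zero indefinitely even when training is healthy.
```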

Our content_encoder.pt is trained on ImageNet291 and synImageNet291, which contain many domains as well as human faces. Generally, you can expect it to generalize to your new datasets. So you can directly...

In our experiments, we found the pretrained encoder generalized to giraffes, landscapes, and art portraits.
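In practice, "directly" means loading the released checkpoint and running it frozen on the new domain. A hypothetical sketch, where the `ContentEncoder` class and its import path are placeholders for whatever architecture the checkpoint was actually trained with:

```
import torch

from model import ContentEncoder  # placeholder import; match the training code

encoder = ContentEncoder()
encoder.load_state_dict(torch.load("content_encoder.pt", map_location="cpu"))
encoder.eval()  # inference only: no fine-tuning on the new dataset

x = torch.randn(1, 3, 256, 256)  # stand-in for a preprocessed input batch
with torch.no_grad():
    features = encoder(x)
```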

We use and modify this code for evaluation: https://github.com/clovaai/stargan-v2/blob/875b70a150609e8a678ed8482562e7074cdce7e5/metrics/eval.py. Fake images are generated from the testing set of the dataset. For cat2dog, there are 500 testing images (https://github.com/clovaai/stargan-v2/blob/master/README.md#animal-faces-hq-dataset-afhq). For FID,...
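As a stand-in for the modified eval.py (not the exact evaluation script), the same protocol can be reproduced with the pytorch-fid package: generate a fake image from every test image, then compare the two directories. The paths below are placeholders.

```
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder paths: real test images vs. translations generated from
# that same test split (e.g. the 500 testing images for cat2dog).
fid = calculate_fid_given_paths(
    ["data/afhq/test/dog", "results/cat2dog"],
    batch_size=50, device=device, dims=2048)
print(f"FID: {fid:.2f}")
```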