VToonify
The generated data for training VToonify-D are bad.
I saved the generated data for training VToonify-D and found that some of the generated portrait data are bad. The following three pictures are the generated input, the generated portrait, and the inference result from your released trained VToonify-D model. You can see that the generated portrait and the inference result do not match.
The following are the steps I used to get the generated portrait:

- Pre-train the encoder:

  ```bash
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 train_vtoonify_d.py \
      --iter 30000 --stylegan_path ./checkpoint/cartoon/generator.pt \
      --exstyle_path ./checkpoint/cartoon/refined_exstyle_code.npy \
      --batch 1 --name vtoonify_d_cartoon --pretrain
  ```
- Train VToonify-D with the following script and save the generated portraits used for training. These generated portraits are bad:

  ```bash
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 train_vtoonify_d.py \
      --iter 2000 --stylegan_path ./checkpoint/cartoon/generator.pt \
      --exstyle_path ./checkpoint/cartoon/refined_exstyle_code.npy \
      --batch 4 --name vtoonify_d_cartoon --fix_color --fix_degree --style_degree 0.5 --fix_style --style_id 26
  ```
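To make "do not match" concrete, one way to compare the saved training portrait against the inference result is a per-pixel difference. Below is a minimal sketch; the `mean_abs_diff` helper and the commented file paths are hypothetical, not part of the VToonify codebase, and synthetic arrays stand in for the real images:

```python
import numpy as np

def mean_abs_diff(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two uint8 images."""
    return float(np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32))))

# In practice, load the two images to compare, e.g. (hypothetical paths):
#   from PIL import Image
#   a = np.asarray(Image.open("generated_portrait.png"))
#   b = np.asarray(Image.open("inference_result.png"))
# Synthetic 4x4 RGB arrays are used here just to show the call:
a = np.full((4, 4, 3), 100, dtype=np.uint8)
b = np.full((4, 4, 3), 110, dtype=np.uint8)
print(mean_abs_diff(a, b))  # 10.0
```

A large mean difference (relative to the small jitter expected from sampling and augmentation) would support the claim that the two outputs genuinely diverge rather than differing by noise.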
My understanding from your paper is that the generated portrait and the inference result of the trained VToonify-D model should be the same. Did I do anything wrong? How exactly did you obtain VToonify-D (cartoon)?
Looking forward to your reply.
generated input
generated portrait for training
inference result from your released trained VToonify-D model