pixel2style2pixel
loss jump problem
Below are my training parameters, which largely follow the ones you provided; the dataset is FFHQ. Why does the loss jump so sharply at around 15k steps? After the jump, the generated pictures are not human faces at all, just unstructured noise. What could be wrong on my side? I hope the author sees this and can take a moment to reply. Thank you so much!
python scripts/train.py \
--dataset_type=ffhq_encode \
--exp_dir=/root/autodl-tmp/path/to/experiment \
--workers=8 \
--batch_size=4 \
--test_batch_size=4 \
--test_workers=8 \
--val_interval=2500 \
--save_interval=5000 \
--encoder_type=GradualStyleEncoder \
--start_from_latent_avg \
--lpips_lambda=0.8 \
--l2_lambda=1 \
--id_lambda=0.1
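
To pinpoint exactly where the loss jumps, the scalar curves written during training can be read back from the TensorBoard event files. A minimal sketch, assuming the logs live under exp_dir/logs and that a scalar tag such as 'train/loss' was recorded (both the path and the tag name are assumptions about the default layout; print ea.Tags() to see the actual names):

# Sketch: read training-loss scalars back from TensorBoard event files
# to find the step where the loss jumps. The log directory and the
# scalar tag name ('train/loss') are assumptions; inspect ea.Tags()
# for what was actually logged in your run.
from tensorboard.backend.event_processing import event_accumulator

log_dir = "/root/autodl-tmp/path/to/experiment/logs"  # assumed log location
ea = event_accumulator.EventAccumulator(log_dir)
ea.Reload()
print(ea.Tags()["scalars"])  # list the available scalar tags

events = ea.Scalars("train/loss")  # assumed tag name
steps = [e.step for e in events]
values = [e.value for e in events]

# Report the largest step-to-step increases in the loss.
jumps = sorted(
    ((values[i + 1] - values[i], steps[i + 1]) for i in range(len(values) - 1)),
    reverse=True,
)
for delta, step in jumps[:5]:
    print(f"step {step}: loss increased by {delta:.4f}")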
Hi, could you please provide some images output by the model from before the jump and after the jump?
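
For such a comparison, something like the sketch below can reconstruct the same test image with two saved checkpoints, one from before the jump and one from after. It must be run from the repo root; the checkpoint file names, the checkpoints directory, and the 256x256 input size are assumptions based on the default training setup, not a confirmed recipe:

# Sketch: reconstruct the same face with checkpoints saved before and
# after the jump. Checkpoint file names below are assumptions; use
# whatever iteration_*.pt files exist under exp_dir/checkpoints.
from argparse import Namespace

import torch
import torchvision.transforms as transforms
import torchvision.utils as vutils
from PIL import Image

from models.psp import pSp  # run from the pixel2style2pixel repo root


def load_psp(ckpt_path):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    opts = ckpt["opts"]
    opts["checkpoint_path"] = ckpt_path  # make pSp load these weights
    return pSp(Namespace(**opts)).eval().cuda()


transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
x = transform(Image.open("test_face.jpg").convert("RGB")).unsqueeze(0).cuda()

for name in ["iteration_10000.pt", "iteration_20000.pt"]:  # before / after the jump
    net = load_psp(f"/root/autodl-tmp/path/to/experiment/checkpoints/{name}")
    with torch.no_grad():
        y = net(x, randomize_noise=False, resize=True)
    # Outputs are roughly in [-1, 1]; map to [0, 1] before saving.
    vutils.save_image(((y + 1) / 2).clamp(0, 1), f"recon_{name}.png")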