Asyrp_official
Bad performance
Thanks for your work!
I tried to reproduce the results for "happy dog" using released pretrained models but the performance was bad.
Here are the settings of my inference script.
sh_file_name="script_inference.sh"
gpu="0"
config="afhq.yml"
guid="dog_happy"
test_step=40 # if large, it takes a long time.
dt_lambda=1.0 # dt_lambda hyperparameter; used by a method that will appear in a follow-up paper.
CUDA_VISIBLE_DEVICES=$gpu python main.py --run_test \
--config $config \
--exp ./runs/${guid} \
--edit_attr $guid \
--do_train 0 \
--do_test 1 \
--n_train_img 0 \
--n_test_img 32 \
--n_iter 5 \
--bs_train 1 \
--t_0 999 \
--n_inv_step 40 \
--n_train_step 40 \
--n_test_step $test_step \
--get_h_num 1 \
--train_delta_block \
--sh_file_name $sh_file_name \
--save_x0 \
--use_x0_tensor \
--hs_coeff_delta_h 1.0 \
--dt_lambda $dt_lambda \
--add_noise_from_xt \
--lpips_addnoise_th 1.2 \
--lpips_edit_th 0.33 \
--sh_file_name "script_inference.sh" \
--manual_checkpoint_name "dog_happy_LC_dog_t999_ninv40_ngen40_0.pth" \
The pretrained model is "afhqdog_p2.pt".
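Before looking at the outputs, it may be worth confirming that both checkpoint files load cleanly. This is only a minimal sketch, assuming PyTorch and purely illustrative paths (the pretrained/ and checkpoint/ directories are assumptions, not the repo's actual layout):

import torch

# Illustrative paths only; adjust to wherever the weights were actually saved.
base = torch.load("pretrained/afhqdog_p2.pt", map_location="cpu")
edit = torch.load("checkpoint/dog_happy_LC_dog_t999_ninv40_ngen40_0.pth", map_location="cpu")

# A truncated or corrupted download usually fails inside torch.load itself;
# otherwise, inspecting the keys confirms the files are complete state dicts.
for name, ckpt in [("afhqdog_p2.pt", base), ("dog_happy checkpoint", edit)]:
    keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []
    print(name, type(ckpt).__name__, keys[:5])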
Some examples of the results are shown below.
Do you have any suggestions on this?
I am having the same issue as above. I suggest the author(s) clone this entire repository, re-download the pretrained models (the _p2.pt files) linked in the README, and run the dog script. @Zwette this can probably be traced back to issue #6, since both issues occur in my case when testing on happy dogs.
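To make the re-download suggestion concrete, one way to tell whether the file itself changed between downloads is to hash both copies. A minimal sketch, with an illustrative path and with the caveat that, as far as I know, no official checksum is published for these weights:

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large .pt files do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("pretrained/afhqdog_p2.pt"))

If the old and new copies produce the same hash, the download itself is unlikely to be the problem and the issue more likely lies elsewhere in the setup.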
Excuse me, did you revise the code in any way? I also got bad results with both self-trained checkpoints and the pretrained models. My script was the same as yours except for the dataset, but something seems to have gone wrong with the color channels or the encoding format of the reference images, as shown below.
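If the symptom is swapped or tinted colors, a quick check of the reference images' channel order can rule that in or out. A minimal sketch, assuming PIL/NumPy and an illustrative file path (OpenCV writes BGR, while PIL and torchvision expect RGB):

import numpy as np
from PIL import Image

# Illustrative path; point this at one of the saved reference images.
img = Image.open("runs/dog_happy/sample.png")
print(img.mode)  # expect "RGB"; other modes ("P", "L", "CMYK") need conversion

arr = np.array(img.convert("RGB"))
# If the outputs look blue-tinted, the channels were likely written in BGR order;
# reversing the last axis swaps them back for visual comparison.
Image.fromarray(arr[:, :, ::-1]).save("sample_channels_swapped.png")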
@hycsy2019 did you find the cause of this? I am running into very similar issues...