Peter Lorenz


Launch file:

```
mpiexec -n 1 python image_sample_inpainting.py --batch_size 32 \
    --training_mode consistency_distillation \
    --sampler multistep \
    --ts 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40 \
    --steps 40 \
    --model_path checkpoints/cd_bedroom256_lpips.pt \
    --attention_resolutions 32,16,8 --class_cond False...
```

You can take any layer of the model. Usually, the last block gives the best results. It can be tricky, and you might need to add an Identity layer so you can read the features out at that point...
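
A minimal PyTorch sketch of one way to do this, using a torchvision ResNet-18 as a stand-in for your model (the layer names and shapes are only for illustration, not from the original repo):

```
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)  # stand-in; any nn.Module with named blocks works
model.eval()

features = {}

def save_activation(module, inputs, output):
    # Keep the activation of the hooked block for later use.
    features["last_block"] = output.detach()

# Hook the last block (here: the final residual stage of ResNet-18).
handle = model.layer4.register_forward_hook(save_activation)

# Alternative: replace the classification head with an Identity layer so the
# forward pass returns the penultimate features directly.
# model.fc = nn.Identity()

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    _ = model(x)

print(features["last_block"].shape)  # e.g. torch.Size([1, 512, 7, 7])
handle.remove()
```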

In the original code it looks like this: https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/scripts/run_cifar.py#L132

```
exp_name, tpu_name, bucket_name_prefix, model_name='unet2d16b2', dataset='cifar10',
optimizer='adam', total_bs=128, grad_clip=1., lr=2e-4, warmup=5000,
num_diffusion_timesteps=1000, beta_start=0.0001, beta_end=0.02, beta_schedule='linear',
model_mean_type='eps', model_var_type='fixedlarge', loss_type='mse',
dropout=0.1, randflip=1, tfds_data_dir='tensorflow_datasets', ...
```
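
For reference, a minimal NumPy sketch of how a linear beta schedule is typically built from the `beta_start` / `beta_end` / `num_diffusion_timesteps` defaults above (my own illustration, not the repo's exact code):

```
import numpy as np

def get_beta_schedule(beta_schedule, beta_start, beta_end, num_diffusion_timesteps):
    # Linear schedule: betas increase evenly from beta_start to beta_end.
    if beta_schedule == 'linear':
        return np.linspace(beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64)
    raise NotImplementedError(beta_schedule)

betas = get_beta_schedule('linear', beta_start=0.0001, beta_end=0.02,
                          num_diffusion_timesteps=1000)
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative product used by the forward process
print(betas.shape, betas[0], betas[-1])   # (1000,) 0.0001 0.02
```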