PUT
Paper 'Reduce Information Loss in Transformers for Pluralistic Image Inpainting' in CVPR 2022
Is it right that pvqvae.yaml is trained first, and then transformer.yaml is trained second? I feel confused and don't know how to start.
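For reference, the order described above matches the paper's two stages: stage one trains the P-VQVAE (pvqvae.yaml), and stage two trains the UQ-Transformer (transformer.yaml) on top of the frozen stage-one model. A rough sketch of the two invocations follows; the train.py entry point and --config_file flag are assumptions based on typical config-driven repos, not the repo's documented commands, so check the README for the exact syntax.

# Stage 1: learn the P-VQVAE encoder/decoder and codebook (hypothetical entry point and flag)
python train.py --config_file configs/pvqvae.yaml
# Stage 2: train the UQ-Transformer over the stage-1 codes (same assumptions)
python train.py --config_file configs/transformer.yaml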
Hi, Thanks for the source code, the result looks great. Btw, do you have a simple script to do the inference for a single image (and mask)? Thanks.
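A single-image run can likely be pieced together from the inference commands quoted in the issues below: --image_dir appears there verbatim, while --mask_dir and the one-file folder layout are assumptions (the repo may pair masks differently), so treat this as a sketch rather than a documented recipe.

# Point the input directories at folders containing just one image and one mask (paths are hypothetical)
python scripts/inference.py --func inference_inpainting \
    --name OUTPUT/cvpr2022_transformer_ffhq/checkpoint/last.pth \
    --input_res 256,256 --num_token_per_iter 100 --num_token_for_sampling 300 \
    --num_replicate 1 --image_dir data/single_image --mask_dir data/single_mask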
A question about the environment
Hello, author! I'd like to ask: can all of your code run under PyTorch 1.7.1? I ran into a problem when setting up the virtual environment, because kornia==0.6.11 always conflicts with PyTorch 1.7.1. How did you resolve this? Looking forward to your reply!
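One common workaround for this kind of conflict is to pin an older kornia line, since the kornia 0.6.x series targets newer PyTorch releases than 1.7.1. Whether the 0.5.x API covers everything this repo calls is an assumption and should be tested before training.

# torchvision 0.8.2 is the release paired with torch 1.7.1
pip install torch==1.7.1 torchvision==0.8.2
# pick a kornia release that predates the newer-torch requirement (untested with this repo)
pip install "kornia<0.6"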
Hello, when training on my own dataset and using the command python scripts/inference.py --func inference_inpainting --name OUTPUT/cvpr2022_transformer_ffhq/checkpoint/last.pth --input_res 256,256 --num_token_per_iter 100 --num_token_for_sampling 300 --num_replicate 1 --image_dir data/1...
Dear seniors, thank you very much for your contributions. When I debug inference with the trained transformer model, I get the following issue: (ImgSyn) liu@liu-ubuntu-18:~/ZZB/PUT-main/scripts$ python inference.py --func inference_inpainting --name OUTPUT/cvpr_2024_p_vqvae_ffhq/checkpoint/last.pth...
Thanks for the excellent code. When I run inference on a picture, the results are as follows (images omitted): gt, mask, LaMa result (from https://github.com/advimman/lama), and PUT result (from model tpami2024_vit_base_naturalscene_res512). The command: python scripts/inference.py...
A question about training time
Hello, author! I want to retrain PUT on CelebA-HQ, with a training set of 27k images. Roughly how long should the first stage (P-VQVAE) and the second stage (UQ-Transformer) take? I previously trained the first stage for 150 epochs and the second stage for 180 epochs; the two stages together took nearly a week, but the inpainting results were not ideal, as shown below: