
Paper 'Reduce Information Loss in Transformers for Pluralistic Image Inpainting', CVPR 2022

28 issues

pvqvae.yaml is trained first, then transformer.yaml second. Is that right? I feel confused and don't know how to start. ![1661345559019](https://user-images.githubusercontent.com/105440384/186423285-f0f08e10-0248-4992-a1d1-c4a262e17185.png)

Hi, thanks for the source code; the results look great. By the way, do you have a simple script to run inference on a single image (and mask)? Thanks.
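Until an official script exists, the preprocessing half of single-image inference can be sketched as follows. This is a minimal sketch with a synthetic image and mask; the mask convention (nonzero = missing pixels) and the function name `prepare_masked_input` are assumptions for illustration, not the repo's actual API, and the model call itself is omitted:

```python
import numpy as np

def prepare_masked_input(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out the masked region of an RGB image.

    image: (H, W, 3) uint8 array.
    mask:  (H, W) array, nonzero where pixels are MISSING
           (this convention is an assumption; check the repo's data loader).
    """
    if image.shape[:2] != mask.shape:
        raise ValueError("image and mask sizes must match")
    masked = image.copy()
    masked[mask > 0] = 0  # remove the pixels to be inpainted
    return masked

# Tiny synthetic example in place of real files:
img = np.full((4, 4, 3), 255, dtype=np.uint8)   # all-white image
msk = np.zeros((4, 4), dtype=np.uint8)
msk[1:3, 1:3] = 1                               # 2x2 hole in the middle
out = prepare_masked_input(img, msk)
print(out[0, 0].tolist(), out[1, 1].tolist())   # → [255, 255, 255] [0, 0, 0]
```

The masked array would then be fed to the trained model; for real images, load them with PIL or OpenCV and resize to the resolution the checkpoint expects (e.g. 256x256 per the commands in this thread).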

Hello author! May I ask whether all of your code can run under PyTorch 1.7.1? I ran into a problem while setting up the virtual environment, because kornia==0.6.11 is always incompatible with PyTorch 1.7.1. How did you solve this? Looking forward to your reply!
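For what it's worth, kornia 0.6.x generally requires a newer PyTorch than 1.7.1, so the usual fix is either to pin an older kornia or to upgrade torch. The minimum version used below (torch >= 1.9.1) is an assumption; verify it against kornia's own requirements metadata. A small helper to sanity-check versions before installing:

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '1.7.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_compatible(torch_version: str, minimum: str) -> bool:
    """Return True if torch_version satisfies the given minimum version."""
    return version_tuple(torch_version) >= version_tuple(minimum)

# Assumed minimum for kornia 0.6.11 (check kornia's setup metadata):
print(is_compatible("1.7.1", "1.9.1"))   # → False: explains the install failure
print(is_compatible("1.10.0", "1.9.1"))  # → True
```

Note that plain string comparison would get this wrong ("1.10.0" < "1.9.1" lexicographically), which is why the versions are parsed into integer tuples.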

![0b67ab49e96604b3e99fc949991133e](https://github.com/liuqk3/PUT/assets/74124390/439d6631-a9cf-48d0-a70a-5761cde7da00) ![4369556c30c7ec5a744e15abe324004](https://github.com/liuqk3/PUT/assets/74124390/2b623a30-6316-4227-abf9-73c6420a1430) ![db3289873aea61e95f9cf672bd667db](https://github.com/liuqk3/PUT/assets/74124390/82fdd060-cc46-45ba-b55b-22be53ba23d8) Hello, when training on my own dataset and running the command `python scripts/inference.py --func inference_inpainting --name OUTPUT/cvpr2022_transformer_ffhq/checkpoint/last.pth --input_res 256,256 --num_token_per_iter 100 --num_token_for_sampling 300 --num_replicate 1 --image_dir data/1`...

Dear seniors, thank you very much for your contributions. When I debug inference with the trained Transformer model, I get the following issue: (ImgSyn) liu@liu-ubuntu-18:~/ZZB/PUT-main/scripts$ python inference.py --func inference_inpainting --name OUTPUT/cvpr_2024_p_vqvae_ffhq/checkpoint/last.pth...

Thanks for the excellent code. When I run inference on a picture: gt: ![crop](https://github.com/liuqk3/PUT/assets/38747253/61527135-27f9-439a-9ca8-75449980f962) mask: ![crop](https://github.com/liuqk3/PUT/assets/38747253/a5ed1879-bf7d-4abb-b8c8-b35c7975d680) LaMa result (from https://github.com/advimman/lama): ![res3](https://github.com/liuqk3/PUT/assets/38747253/ba16e439-1283-4144-8a3a-1fa95c48359d) PUT result (model tpami2024_vit_base_naturalscene_res512): ![crop](https://github.com/liuqk3/PUT/assets/38747253/169a8a6d-9413-4c66-87de-15d2ebf62028) `python scripts/inference.py`...

Hello author! I want to retrain PUT on CelebA-HQ with a training set of 27k images. Roughly how long do the first stage (P-VQVAE) and the second stage (UQ-Transformer) take to train? I previously trained the first stage for 150 epochs and the second stage for 180 epochs, nearly a week in total across both stages, but the inpainting results were not ideal, as shown: ![image](https://github.com/user-attachments/assets/140eabbc-9ccb-4ee6-9349-426420082c2a)
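No authoritative training budget is given in this thread, but a back-of-the-envelope estimate can help plan runs. The per-epoch minutes below are placeholders, not measured numbers; substitute your own timings from a few warm-up epochs:

```python
def estimated_days(epochs: int, minutes_per_epoch: float) -> float:
    """Rough wall-clock estimate for one training stage, in days."""
    return epochs * minutes_per_epoch / (60 * 24)

# Placeholder per-epoch times (assumptions, not measurements):
stage1 = estimated_days(150, minutes_per_epoch=30)  # P-VQVAE
stage2 = estimated_days(180, minutes_per_epoch=25)  # UQ-Transformer
print(round(stage1 + stage2, 2))  # → 6.25 days, i.e. roughly a week
```

With these placeholder timings the two stages come out near the "almost a week" reported above, which suggests the schedule itself is plausible and the quality issue may lie elsewhere (data size, augmentation, or hyperparameters).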