Qiankun Liu
Can you provide more details or examples?
No. Just
```
python scripts/inference.py --name OUTPUT/transformer_exp_name/checkpoint/last.pth --func inference_complet_sample_in_feature_for_evaluation --gpu 0 --batch_size 1
```
is enough.
@Lecxxx , It seems that the model is not loaded correctly. You can have a look at this.
@Lecxxx , Yes. We follow ICT; the testing images for FFHQ, Places2 and ImageNet are exactly the same as in ICT. Only 800 images are used for inference to get the...
Hi @abhinavisgood , Sorry for the delayed reply and thanks for your interest! The current `inference.py` only supports the images and masks provided in the config file, which will be...
It is strongly suggested to use `inference_inpainting.py` for evaluation, and the relative path should be provided. For example:
```
python scripts/inference_inpainting.py --func inference_inpainting \
    --name OUTPUT/transformer_exp_name/checkpoint/last.pth \
    --input_res 256,256 \...
```
Thanks for your interest in our work. If you use our provided P-VQVAE to train the second-stage transformer at `512x512`, one thing to keep in mind is that...
Hi @zhangbaijin , As you said, your dataset is small. The training time depends on the number of images in the training set and the number of epochs you specified. After...
Did you add `return_data_keys: [image, mask, relative_path]` to the `validation_datasets`? In my experience, if you did, it should work. But you should debug this yourself.
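For reference, a minimal sketch of where that key could sit in the experiment config. Only `return_data_keys` comes from the reply above; the surrounding keys, dataset name, and file paths are assumptions and will differ in your actual config:

```yaml
# Hypothetical config excerpt -- adapt key names and paths to your setup.
dataloader:
  validation_datasets:
    - name: my_val_dataset                 # assumed dataset name
      image_list_file: data/val_images.txt # assumed path
      mask_list_file: data/val_masks.txt   # assumed path
      # Make the dataset return these fields so inference can save
      # each result under the image's relative path.
      return_data_keys: [image, mask, relative_path]
```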
Hello, please refer to this issue: [#29](https://github.com/liuqk3/PUT/issues/29#issuecomment-1868981857)