How to get the quantitative results as reported in the paper
Great work!
I am validating your pretrained model on the GSO dataset, but I cannot reach the PSNR reported in your paper, likely due to a camera-distance and object-scale mismatch with my rendered data. Could you share the parameters you used when rendering GSO and Omni3D, so that I can evaluate your method more fairly?
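For reference, this is the PSNR definition I am using to evaluate (a minimal NumPy sketch, assuming images normalized to [0, 1]; `psnr` is my own helper, not from your codebase):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# toy check: constant error of 0.1 -> MSE = 0.01 -> PSNR = 20 dB
gt = np.zeros((4, 4, 3))
pred = np.full((4, 4, 3), 0.1)
print(round(psnr(pred, gt), 2))  # 20.0
```

If your evaluation differs (e.g. averaging per image vs. per pixel, or computing PSNR on 8-bit images), that alone could explain part of the gap, so please correct me if so.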
By the way, I have tried aligning with --scale, but a subtle difference remains. I have also noticed that your output rendering scale is not aligned with the zero123++ v1.2 prediction (see the image below; left: zero123++ v1.2, right: yours).
Since you also mentioned in #66 that you train with mixed fov=30 and fov=50, could this result in a random output scale for the object?
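My concern in concrete terms: under a simple pinhole model, if the camera distance is *not* adjusted to compensate for the fov change, the object's on-screen size differs noticeably between the two settings (a back-of-the-envelope sketch; the fixed distance is my assumption, not something stated in the thread):

```python
import math

def image_scale(fov_deg, distance=1.0, obj_size=1.0):
    # fraction of the image height covered by an object of size obj_size
    # at the given distance, under a pinhole camera with vertical fov fov_deg
    return obj_size / (2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))

# at the same camera distance, fov=30 renders the object ~1.74x larger than fov=50
ratio = image_scale(30) / image_scale(50)
print(round(ratio, 2))  # 1.74
```

So if training mixed the two fovs at a fixed distance, I would expect the predicted scale to vary; if the distance was scaled to keep the object size constant, that would answer my question.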
@mengxuyiGit Hello, could you share your GSO link and evaluation code? I am not familiar with this area and would really appreciate your help.