mvsnerf

DTU evaluation accuracy?

hdjang opened this issue 3 years ago · 7 comments

Hi, thanks for sharing such a great work!

I have a simple question regarding evaluation accuracy on the DTU dataset when using the provided pre-trained checkpoint.

I got the numbers below, which are slightly lower than the ones in the paper. What am I missing? (For the LLFF dataset, I got the same evaluation accuracy as the paper using the same pre-trained checkpoint. I used renderer.ipynb to evaluate, as suggested in the repo.)
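For reference, PSNR in this kind of evaluation is computed per test image from the mean squared error between the rendering and the ground truth. A minimal sketch of that metric (this is not the repo's actual evaluation code; the function name and array shapes are illustrative):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a rendered image and ground truth,
    both assumed to be float arrays in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB
gt = np.zeros((4, 4, 3))
pred = np.full((4, 4, 3), 0.1)
print(round(psnr(pred, gt), 2))  # 20.0
```

Small differences in image scale, crop, or test-view selection shift this number, which is one reason reported PSNRs can diverge between runs.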

hdjang avatar Jan 11 '22 17:01 hdjang

[image]

hdjang avatar Jan 12 '22 07:01 hdjang

Hi, have you reproduced the results on the DTU dataset? I used the same settings as the author, but I got a much lower PSNR than reported in the paper.

liustu avatar Jan 16 '22 02:01 liustu

I have the same question. Maybe the authors provided a sub-optimal model for us. By the way, I reproduced the results on DTU and obtained a higher PSNR than the given checkpoint: PSNR 26.673; SSIM 0.931; LPIPS 0.172.

zhangchuanyi96 avatar Mar 30 '22 08:03 zhangchuanyi96

[image]

I get the same numbers as hdjang. I wonder why the PSNR values without fine-tuning in the paper correspond to results where the 3 nearest views are always used as input for each validation image, whereas the paper states that 3 fixed views are always used:

For each testing scene, we select 20 nearby views; we then select 3 center views as input, 13 as additional input for per-scene fine-tuning, and take the remaining 4 as testing views.

With 3 fixed views, the PSNR I obtain for the given checkpoint using renderer.ipynb is significantly lower (21.05 for DTU). Could the authors clarify this?
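The "3 nearest views" selection being discussed can be sketched as picking the source cameras whose centers are closest to the target camera's center. This is a simplified illustration of the idea, not the repo's actual pairing logic; the camera-to-world pose format and function names here are assumptions:

```python
import numpy as np

def nearest_views(target_c2w, source_c2ws, k=3):
    """Return indices of the k source cameras whose centers are closest
    to the target camera center (translation column of a 4x4 c2w pose)."""
    target_center = target_c2w[:3, 3]
    centers = source_c2ws[:, :3, 3]
    dists = np.linalg.norm(centers - target_center, axis=1)
    return np.argsort(dists)[:k]

# Hypothetical poses: four cameras with centers at x = 0, 1, 2, 5
poses = np.stack([np.eye(4) for _ in range(4)])
poses[:, 0, 3] = [0.0, 1.0, 2.0, 5.0]
target = np.eye(4)
target[0, 3] = 1.2
print(sorted(nearest_views(target, poses)))  # [0, 1, 2]
```

Under per-view nearest selection, each test image gets its own best-matched inputs, so PSNR is naturally higher than with one fixed input triplet shared by all test views.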

chrschinab avatar May 25 '22 10:05 chrschinab

@zhangchuanyi96 Hello, I'm trying to reproduce the results following the command in the README:

```
python train_mvs_nerf_pl.py --with_depth --imgScale_test 1.0 --expname mvs-nerf --num_epochs 6 --N_samples 128 --use_viewdirs --batch_size 1 --dataset_name dtu --datadir ../data/mvs_training/dtu
```

But the results are much lower than those in the paper. Could you please tell me the command or hyperparameters you used when reproducing the DTU results? Thank you.

otakuxiang avatar Sep 02 '22 12:09 otakuxiang

> @zhangchuanyi96 Hello, I'm trying to reproduce the results following the command in the README: `python train_mvs_nerf_pl.py --with_depth --imgScale_test 1.0 --expname mvs-nerf --num_epochs 6 --N_samples 128 --use_viewdirs --batch_size 1 --dataset_name dtu --datadir ../data/mvs_training/dtu` But the results are much lower than those in the paper. Could you please tell me the command or hyperparameters you used when reproducing the DTU results? Thank you.

It has been months, so sadly I can't remember the exact reproduction process. I can only vaguely remember that I probably didn't change the given hyperparameters. Your problem may be related to differences in machines or environments.

zhangchuanyi96 avatar Sep 02 '22 12:09 zhangchuanyi96

I think the setting for the results given by the authors in the paper is to select the 3 nearest views. Fine-tuning is performed on 16 training views (the 3 input views plus the 13 additional views). What do y'all think?
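The 20-view split quoted from the paper (3 center views as input, 13 for fine-tuning, 4 for testing) could be sketched as below. Which views count as "center" is exactly the ambiguity in this thread, so the middle-of-the-list selection here is only a hypothetical illustration:

```python
def split_views(view_ids):
    """Split 20 nearby views per the paper's stated protocol:
    3 center views as input, 13 for fine-tuning, 4 held out for testing.
    NOTE: treating the middle of the list as 'center' is an assumption."""
    assert len(view_ids) == 20
    mid = len(view_ids) // 2
    inputs = view_ids[mid - 1:mid + 2]               # 3 "center" views
    remaining = view_ids[:mid - 1] + view_ids[mid + 2:]
    finetune = remaining[:13]                        # 13 extra training views
    test = remaining[13:]                            # 4 held-out test views
    return inputs, finetune, test

inputs, finetune, test = split_views(list(range(20)))
print(len(inputs), len(finetune), len(test))  # 3 13 4
```

Whichever concrete indices are used, the key point in this thread is whether the 3 input views are fixed for all test images or re-chosen per test view; the two protocols yield noticeably different PSNR.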

caiyongqi avatar Oct 26 '22 03:10 caiyongqi