
Results on ENeRF-Outdoor dataset and poor quality depth

ricshaw opened this issue on Feb 16 '23 · 6 comments

Hi, thanks for the great work! However, after running your training script (python train_net.py --cfg_file configs/enerf/enerf_outdoor/actor1.yaml) on Actor1 for 50 epochs, I am getting the following results. The color predictions are not as good as those advertised on your project page, with a lot of warping in the background. The depth maps are also quite poor, with the depth of the shadow regions predicted incorrectly. Do you know why this might be?

[Image: actor1_0800_0_800]

Color: https://user-images.githubusercontent.com/9107279/219429442-24e2cc1d-bb5b-4d78-9f58-588e318fdbaa.mp4

Depth: https://user-images.githubusercontent.com/9107279/219429583-eccd8139-173f-4a6c-b4c6-26d0e83e5db9.mp4

ricshaw avatar Feb 16 '23 16:02 ricshaw

Hi, thanks for your attention. For the color prediction of the background, I think setting input_views_num to 4 will give better results (the video on the project page was rendered with input_views_num = 4): https://github.com/zju3dv/ENeRF/blob/master/configs/enerf/enerf_outdoor/actor1_path.yaml#L5
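For reference, the change is a one-line edit to the linked config (the key name and location are taken from the link above; treat this as a sketch of the edit, not a complete config file):

```yaml
# configs/enerf/enerf_outdoor/actor1_path.yaml, around the linked line
input_views_num: 4  # render with 4 input source views, as on the project page
```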

For the depth prediction in shadow regions, try setting the background rendering mode to blend the foreground source images, i.e. pass src_inps=batch['src_inps'] at https://github.com/zju3dv/ENeRF/blob/master/lib/networks/enerf/network_composite.py#L139. This produces reasonable depth predictions, but it will introduce some ghosting artifacts near the people.
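To make the suggestion concrete, here is a self-contained toy of the compositing idea (a sketch; all names and shapes are illustrative, not ENeRF's actual API). In the suggested mode, the background pixels come from the original source views rather than from pre-extracted background plates, so shadows stay consistent with the foreground, at the cost of ghosting near the people:

```python
import numpy as np

def composite(fg_rgb, fg_alpha, src_inps, bg_plate=None, blend_src=True):
    """Toy compositor, not the repo's code.

    fg_rgb:   (H, W, 3) rendered foreground colors
    fg_alpha: (H, W, 1) foreground opacity in [0, 1]
    src_inps: (V, H, W, 3) original source views, as in batch['src_inps']
    bg_plate: (H, W, 3) pre-extracted background, used when blend_src=False
    """
    if blend_src:
        # "foreground images blending": the background is built from the
        # original source views, analogous to passing src_inps=batch['src_inps']
        bg = src_inps.mean(axis=0)
    else:
        bg = bg_plate
    return fg_alpha * fg_rgb + (1.0 - fg_alpha) * bg
```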

haotongl avatar Feb 17 '23 02:02 haotongl

Following the instructions, I also found the result video on the outdoor dataset to be of poor quality after setting input_views_num to 4. The PSNR after 50 epochs is about 27, which is far lower than on the ZJU-MoCap dataset. The modified parameters are shown in the attached screenshots: [Images: 1678241216140, 1678241243011]
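(For context, the PSNR quoted here is the standard image metric; a minimal sketch for float images scaled to [0, 1], with the function name and range convention being mine rather than the repo's evaluation code:)

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray) -> float:
    """PSNR in dB for images in [0, 1]; higher is better.
    ~27 dB corresponds to an RMSE of roughly 0.045 per pixel."""
    mse = float(np.mean((pred - gt) ** 2))
    return 10.0 * np.log10(1.0 / mse)
```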

chky1997 avatar Mar 08 '23 02:03 chky1997

ZJU-MoCap gets a high PSNR because it is a simple dataset. ENeRF-Outdoor is more challenging; a PSNR of 27 is not low for this dataset. If you can produce rendering results similar to the project page, that is a good indication your usage is correct.

haotongl avatar Mar 08 '23 02:03 haotongl

Thank you for your explanation! However, my rendering result is far poorer than the project page; the quality is basically the same as what @ricshaw provided in this issue. I compared the properties of the saved videos: the project-page video is 48 MB, while the video I saved from run.py is only 8 MB. Could the quality difference be caused by the video saving process?

https://user-images.githubusercontent.com/62194406/223611717-692d0d1f-2550-48e6-b65e-7c92a137a296.mp4

chky1997 avatar Mar 08 '23 02:03 chky1997

Thanks. I don't think the saving process affects the quality much; the quality of this rendering video is approaching that of the video on the project page. The main artifacts seem to come from the edges of the frame, which may be because those regions are unseen in the input views.
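If you want to rule out compression anyway, one check is to re-encode the rendered frames at the encoder's highest quality and compare (a sketch assuming the frames were also saved as individual PNGs and that imageio with the ffmpeg backend is installed; the paths are hypothetical, and this is not what run.py does internally):

```python
import glob
import imageio.v2 as imageio

frames = sorted(glob.glob('result_dir/frames/*.png'))  # hypothetical path
writer = imageio.get_writer('recheck_hq.mp4', fps=30, quality=10)  # 10 = max
for f in frames:
    writer.append_data(imageio.imread(f))
writer.close()
```

If the high-bitrate file shows the same warping, the artifacts come from the rendering itself rather than the saving step.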

There are some differences between the released code and the code used to generate the rendered video on the project page. The released code uses the bounding box generated by the visual hull, whereas the earlier code used a bounding box from a rough estimate of the 3D keypoints of the human body. I will try to release the model and the corresponding bounding boxes to help you fully reproduce the rendering video on the project page.
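For anyone comparing the two strategies, here is a hedged sketch of the earlier keypoint-based bounding box described above (the padding value and array layout are assumptions; the released code derives the box from the visual hull instead):

```python
import numpy as np

def bbox_from_keypoints(kpts3d: np.ndarray, padding: float = 0.1) -> np.ndarray:
    """Rough scene bbox from estimated 3D human keypoints.

    kpts3d: (N, 3) world-space keypoints.
    Returns a (2, 3) array [[xmin, ymin, zmin], [xmax, ymax, zmax]].
    """
    return np.stack([kpts3d.min(axis=0) - padding,
                     kpts3d.max(axis=0) + padding])
```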

haotongl avatar Mar 08 '23 03:03 haotongl

Thank you so much for your help!

chky1997 avatar Mar 08 '23 05:03 chky1997