gnerf
Experiments on DTU and Blender datasets: blurry outputs, mode collapse
Thanks for sharing the implementation! This is really interesting work!
I'm trying to reproduce the results on DTU and have several questions:
- What is the size of the images used for the experiments? In config/dtu.yaml, the image size defaults to [500, 400]. However, in datasets.py (https://github.com/quan-meng/gnerf/blob/a008c63dba3a0f7165e912987942c47972759879/dataset/datasets.py#L120), the size is enforced to be proportional to the original size, which is 1600x1200 for DTU, so [500, 400] does not work. Should the size be [400, 300] instead? I set the image size to [400, 300] in the following experiments.
- I got pretty blurry synthesis results on DTU; for example, on scan63 after 30K iterations I got the results below, and similarly on scan4 after 30K iterations.
- What are the pose estimation scores (rotation and translation errors) on the DTU dataset?
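As a side note on the first question, the aspect-ratio constraint that rejects [500, 400] can be sketched as follows (a minimal illustration, not the repo's actual code; the 1600x1200 DTU resolution is from the preprocessed dataset):

```python
# Sketch of the proportionality constraint described above: the target
# size must keep the aspect ratio of the original DTU resolution
# (1600 x 1200), so [500, 400] is rejected while [400, 300] is accepted.
def is_proportional(target_wh, original_wh=(1600, 1200)):
    tw, th = target_wh
    ow, oh = original_wh
    # The same scale factor must apply to both dimensions,
    # i.e. tw/ow == th/oh, written without division.
    return tw * oh == th * ow

print(is_proportional((500, 400)))  # False: 500/1600 != 400/1200
print(is_proportional((400, 300)))  # True:  400/1600 == 300/1200
```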
Here is one additional question regarding experiments on the Blender dataset:
It seems there is a mode collapse issue: the GAN generator produces single-color images (e.g., red/green/white). Restarting the training a couple of times resolves it. Is this normal in your experiments? I want to make sure I'm using the codebase correctly.
Thanks again for sharing the code for this amazing work!
- We use the preprocessed DTU dataset from the MVSNet repo, so the code is correct
- The blurry results suggest that the intrinsic parameters are not correct
- The pose estimation is not as accurate as COLMAP on the DTU dataset, but you can check the pose status in TensorBoard or evaluate it with the ATE toolbox
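For a rough per-pose check without the toolbox, the rotation and translation errors mentioned above can be computed like this (a hedged sketch with hypothetical pose arrays; a proper trajectory evaluation should first align the two trajectories, e.g. with a Umeyama/Sim(3) alignment, as the ATE toolbox does):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) and translation error (L2 norm) between
    one estimated and one ground-truth camera pose."""
    # Relative rotation; its angle is the geodesic distance on SO(3).
    R_rel = R_est.T @ R_gt
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(t_est - t_gt)
    return rot_err_deg, trans_err

# Identical poses give zero error.
R, t = np.eye(3), np.zeros(3)
print(pose_errors(R, t, R, t))  # (0.0, 0.0)
```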
Yes, the GAN training sometimes fails, and it's hard to choose the same hyperparameters for all the scenes
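A cheap way to catch the single-color collapse discussed above is to monitor the per-image pixel variance of generated batches and restart when it stays near zero. This is a hypothetical monitoring snippet, not part of the gnerf code; the threshold is an assumption you would tune:

```python
import numpy as np

def looks_collapsed(batch, var_threshold=1e-4):
    """Heuristic collapse check: flag a generated batch when every image
    is nearly a single flat color (tiny per-image pixel variance).
    batch: array of shape (N, H, W, 3) with values in [0, 1]."""
    per_image_var = batch.reshape(batch.shape[0], -1).var(axis=1)
    return bool((per_image_var < var_threshold).all())

flat = np.full((4, 8, 8, 3), 0.5)                     # flat single-color images
noisy = np.random.default_rng(0).random((4, 8, 8, 3)) # diverse random images
print(looks_collapsed(flat))   # True
print(looks_collapsed(noisy))  # False
```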