zipnerf-pytorch
Floating artifacts with custom dataset
Hi, first of all, thank you for the implementation.
I have trained and rendered the model on some existing scenes from the 360 dataset, such as bicycle, kitchen, and room. The rendering is almost perfect in each case. However, with custom datasets I'm seeing a lot of ghostly floating artifacts. Some of the outputs are available here.
Could you please take a look and provide any possible reasons or suggestions? Thank you!
I'm using the following commands for training and rendering:
python train.py \
--gin_configs=configs/360.gin \
--gin_bindings="Config.data_dir = '${DATA_DIR}'" \
--gin_bindings="Config.exp_name = '${EXP_NAME}'" \
--gin_bindings="Config.factor = 4" \
--gin_bindings="Config.batch_size = 8192"
python render.py \
--gin_configs=configs/360.gin \
--gin_bindings="Config.data_dir = '${DATA_DIR}'" \
--gin_bindings="Config.exp_name = '${EXP_NAME}'" \
--gin_bindings="Config.render_path = True" \
--gin_bindings="Config.render_path_frames = 480" \
--gin_bindings="Config.render_video_fps = 120" \
--gin_bindings="Config.factor = 4"
Maybe your dataset was captured under varying lighting conditions? Try retraining with 360_glo.gin.
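For reference, switching configs should only require pointing --gin_configs at 360_glo.gin; the other bindings can stay the same. A sketch based on the training command above (the _glo suffix on exp_name is my own choice, to avoid overwriting the earlier experiment):

```shell
# Retrain with the GLO config, which adds per-image appearance
# embeddings that can absorb exposure/lighting changes across captures.
python train.py \
  --gin_configs=configs/360_glo.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.exp_name = '${EXP_NAME}_glo'" \
  --gin_bindings="Config.factor = 4" \
  --gin_bindings="Config.batch_size = 8192"
```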
Hi! Thanks for the implementation! I'm seeing the same fog, even when training with 360_glo.gin.
@alex89607, @shairatabassum, @SuLvXiangXin I was wondering, did you find a solution to the "fog"? I am getting the same, especially when the camera gets too close to the object.