
Training on my own data gives poor results

Open · CodeFly-123 opened this issue 2 years ago · 1 comment

Hi, thank you for making this repo. I made a 360° dataset of 100 pictures with an iPhone, with the frames extracted from a video. The original picture size is (1080, 1920). I successfully used the COLMAP GUI and imgs2poses.py to estimate the camera poses for my own dataset. After training for nine epochs, the eval result is not very good.

The parameters for a recent experiment are listed below:

Branch: master
Data: https://drive.google.com/drive/folders/1vDBGXQXosueued3NIhC0nl8ob22D9q7p?usp=sharing
Code: python train.py
--dataset_name llff
--root_dir $DATASET_PATH
--N_importance 64 --img_wh 270 480
--num_epochs 60 --batch_size 1024
--optimizer adam --lr 5e-4
--lr_scheduler steplr --decay_step 20 40 --decay_gamma 0.5
--exp_name $EXP_NAME
--spheric_poses

python eval.py
--root_dir $DATASET_PATH
--dataset_name llff --scene_name $SCENE_NAME
--img_wh 270 480 --N_importance 64 --ckpt_path $CKPT_PATH
--spheric_poses
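One thing worth double-checking in the commands above is that --img_wh preserves the aspect ratio of the original frames; here 270 480 is exactly the (1080, 1920) portrait frames downscaled 4x, which is correct. A quick sanity check (a hypothetical helper, not part of the repo):

```python
def downscaled_wh(orig_w, orig_h, factor):
    """Downscale (width, height) by an integer factor, checking divisibility
    so the aspect ratio (and hence the COLMAP intrinsics) stays exact."""
    assert orig_w % factor == 0 and orig_h % factor == 0, \
        "choose a factor that divides both dimensions"
    return orig_w // factor, orig_h // factor

# iPhone portrait frames are 1080 x 1920; downscaled 4x for training:
print(downscaled_wh(1080, 1920, 4))  # (270, 480), matching --img_wh 270 480
```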

Train and eval output (attached screenshots: 1657176216510, hall1)

Given this result, I think the reason may be that the scene is too complex or that the way I took the pictures is wrong, but I don't know where to go from here. I hope to get your answer. Thanks!

CodeFly-123 · Jul 07 '22 06:07

Hi there, I think one possible reason is that the estimated camera poses are not very accurate. The original NeRF is very sensitive to this. I encountered a similar issue with a custom LLFF-style dataset, and I solved it by using a deformable NeRF (which tolerates small errors in the camera poses). My implementation is based on nerf_pl; if you are interested, you can take a look.
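To illustrate the idea of tolerating pose error (this is only a rough sketch, not songrise's actual implementation): one common approach, as in pose-refining NeRF variants like BARF or NeRF--, is to learn a small per-image SE(3) correction on top of the COLMAP pose. A minimal NumPy sketch of applying such a correction:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle 3-vector -> 3x3 rotation matrix via Rodrigues' formula."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def refine_pose(c2w, delta_r, delta_t):
    """Apply a small learnable SE(3) correction (delta_r: axis-angle,
    delta_t: translation) to a 3x4 camera-to-world pose matrix."""
    R = rodrigues(delta_r) @ c2w[:, :3]
    t = c2w[:, 3] + delta_t
    return np.hstack([R, t[:, None]])
```

In a pose-refining NeRF, delta_r and delta_t would be trainable parameters per image, optimized jointly with the radiance field, so small COLMAP errors get absorbed instead of blurring the reconstruction.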

songrise · Jul 10 '22 07:07