multinerf
How to train on your own data?
Sorry if this is covered in the documentation. I have a set of JPGs and ran bash scripts/local_colmap_and_resize.sh my_dataset_dir, but from here I'm stumped as to what to do next.
Also, does this train on 360° equirectangular images or video?
You should be able to run something like
python -m train \
--gin_configs=configs/360.gin \
--gin_bindings="Config.data_dir = 'my_dataset_dir'" \
--gin_bindings="Config.checkpoint_dir = 'my_dataset_dir/checkpoints'" \
--logtostderr
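After training finishes, the repo also ships eval.py and render.py entry points that take the same gin config and bindings. The sketch below is hedged: Config.render_dir and Config.render_path are assumed field names, so check render.py and configs/ for the exact Config fields before running.

```shell
# Hedged sketch: render a camera path from the trained checkpoints.
# The flags mirror the train command above; the render-specific bindings
# (Config.render_dir, Config.render_path) are assumptions to verify
# against render.py.
python -m render \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = 'my_dataset_dir'" \
  --gin_bindings="Config.checkpoint_dir = 'my_dataset_dir/checkpoints'" \
  --gin_bindings="Config.render_dir = 'my_dataset_dir/render'" \
  --gin_bindings="Config.render_path = True" \
  --logtostderr
```

Swapping `render` for `eval` with the same data/checkpoint bindings should compute test-set metrics instead of rendering a path.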
Cool, and for just normal images? By the way, this is the 360° camera I want to be testing with: https://www.insta360.com/product/insta360-oners/1inch-360 — can I just use the 360° video?
Sorry if this is covered in the documentation. I have a set of JPGs and ran bash scripts/local_colmap_and_resize.sh my_dataset_dir, but from here I'm stumped as to what to do next.
Same here. I ran local_colmap_and_resize.sh (normal images, no 360°) and don't really know what to do next. I'm completely new to this, and the documentation loses me at the end of "Using your own data".
Hi, so far we don't support stitched 360° data, but you can use fisheye data (the direct output from one half of the 360° camera) if you use the OPENCV_FISHEYE argument to the COLMAP script.
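That fisheye workflow might look like the following. Note this is a hedged sketch: passing the camera model as a second positional argument is an assumption, so check how scripts/local_colmap_and_resize.sh actually receives the camera model (it may be a positional argument or an environment variable).

```shell
# Hedged sketch: run the COLMAP preprocessing script on raw (unstitched)
# frames from one lens of the 360° camera, using COLMAP's OPENCV_FISHEYE
# camera model instead of the default perspective model.
# Assumption: the script accepts the camera model as its second argument;
# verify against scripts/local_colmap_and_resize.sh.
bash scripts/local_colmap_and_resize.sh my_dataset_dir OPENCV_FISHEYE
```

The important point is that COLMAP is given a fisheye camera model so it can estimate the lens distortion correctly; the stitched equirectangular output can't be used this way.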
I just added more explicit instructions here for what to do after running local_colmap_and_resize.sh.