multinerf
Poor reconstructed images when using custom data
Hello, thank you for the awesome work!!:)
My Mip-NeRF 360 training results are not good.
Could you suggest what to do to get good results?
Current Experiment Method
Data used
- 180 images sampled from a 1-minute video taken in a botanical garden
- Shot while repeatedly moving from side to side for about 5 steps, keeping the camera facing forward and changing only the camera height between passes
- Camera: iPhone 12 Pro
- Example: (image attached)
Experiment method
Followed the "Using your own data" section of the multinerf README, using 360.gin (no code or config files modified).
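Concretely, the commands were roughly the following, with ${DATA_DIR} pointing at my scene folder (reproduced from memory, so the exact flags may differ slightly from the README):

```bash
# Pose estimation + downsampling, as in the README:
bash scripts/local_colmap_and_resize.sh ${DATA_DIR}

# Training with the unmodified 360 config:
python -m train \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
  --logtostderr
```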
Result
(rendered images attached)
Attempted solutions
- Visualizing the camera poses extracted by COLMAP confirmed that they are similar to my actual camera positions (pose-check script below this list).
- Changed the camera model when running COLMAP and also tried "sequential" feature matching (commands below), but the results were similar.
- Changing near/far/forward_facing/batch_size/render_camtype/render_dist_percentile in 360.gin (example edits below) did not improve the results either.
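For the first point, I checked the poses with a small standalone script rather than the repo's own tooling. This is a minimal sketch that parses COLMAP's text export directly; it assumes the sparse model was converted to text format, and the path `my_scene/sparse/0/images.txt` is a placeholder:

```python
# Minimal pose sanity-check: plot COLMAP camera centers.
# Assumes a text-format sparse model (colmap model_converter --output_type TXT).
import numpy as np
import matplotlib.pyplot as plt

def load_camera_centers(images_txt):
    """Parse COLMAP images.txt and return an (N, 3) array of camera centers."""
    with open(images_txt) as f:
        lines = [l for l in f if not l.startswith('#')]
    centers = []
    # images.txt alternates one pose line and one 2D-point line per image.
    for line in lines[::2]:
        elems = line.split()
        if len(elems) < 8:
            continue
        qw, qx, qy, qz = map(float, elems[1:5])
        t = np.array([float(v) for v in elems[5:8]])
        # World-to-camera rotation from the unit quaternion (qw, qx, qy, qz).
        R = np.array([
            [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
            [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
            [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
        ])
        centers.append(-R.T @ t)  # camera center C = -R^T t
    return np.array(centers)

centers = load_camera_centers('my_scene/sparse/0/images.txt')
ax = plt.figure().add_subplot(projection='3d')
ax.scatter(centers[:, 0], centers[:, 1], centers[:, 2])
ax.set_title('COLMAP camera centers')
plt.show()
```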
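For the second point, the manual COLMAP run looked roughly like this (flag names are from COLMAP's CLI; OPENCV is just one of the camera models I tried):

```bash
colmap feature_extractor \
  --database_path ${DATA_DIR}/database.db \
  --image_path ${DATA_DIR}/images \
  --ImageReader.camera_model OPENCV \
  --ImageReader.single_camera 1

# Sequential matching instead of exhaustive, since the frames come from video:
colmap sequential_matcher --database_path ${DATA_DIR}/database.db

colmap mapper \
  --database_path ${DATA_DIR}/database.db \
  --image_path ${DATA_DIR}/images \
  --output_path ${DATA_DIR}/sparse
```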
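For the third point, the edits to 360.gin were along these lines (parameter names as listed above; the values are only examples of what I tried, not suggestions):

```
Config.near = 0.1
Config.far = 1e6
Config.forward_facing = False
Config.batch_size = 8192
Config.render_camtype = 'perspective'
Config.render_dist_percentile = 0.5
```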
I used my own raw dataset for training, and the rendered image is not only blurry but also has very strange colors.
Is the "ColorMatrix2" parameter in the JSON file arranged in the wrong order? Or are the poses computed by COLMAP (bash scripts/local_colmap_and_resize.sh ${DATA_DIR}) wrong? What is the problem? (A small metadata check script is included after the images below.)
The rendered image:
The original image (RGB, for reference):
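For what it's worth, this is how I sanity-checked the metadata. It is a quick sketch assuming the per-image JSON was produced by exiftool -json (which wraps each file's tags in a one-element list) and that ColorMatrix2 holds the DNG XYZ-to-camera matrix as nine row-major values; the filename is a placeholder:

```python
import json
import numpy as np

# Hypothetical filename; assumes the per-image EXIF json came from
# `exiftool -json`, which wraps each file's tags in a one-element list.
with open('exif/IMG_0001.json') as f:
    meta = json.load(f)[0]

raw = meta['ColorMatrix2']
# The DNG ColorMatrix2 tag (XYZ -> camera-native color) is nine numbers;
# depending on exiftool settings it may arrive as a string or a list.
vals = [float(v) for v in (raw.split() if isinstance(raw, str) else raw)]
cm2 = np.array(vals).reshape(3, 3)  # row-major 3x3
print(cm2)
```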
I have a similar issue. Did you find any solution?