DSNeRF
Scanned depth
In your video you said that you tested DSNeRF with scanned depth data. How can I train the model with my own depth data? Which format is needed? Thanks in advance.
The input in the paper is sparse 3D depth. I think they extend the concept to dense depth in the video? I guess the challenge is how to acquire weights for the dense depths; you would still need to compute reprojection errors for them. I think even state-of-the-art depth sensors have a certain amount of error.
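For illustration, here is a minimal sketch of turning per-point reprojection errors into confidence weights. The function name `error_to_weight` and the Gaussian falloff are my own assumptions, not the repo's API, though `load_colmap_depth` appears to use a similar error-based scheme for COLMAP points:

```python
import numpy as np

def error_to_weight(err):
    # Hypothetical weighting: errors near the mean get weight ~1,
    # outliers are down-weighted with a Gaussian falloff.
    err = np.asarray(err, dtype=np.float64)
    mean_err = err.mean() + 1e-8  # guard against division by zero
    return np.exp(-((err / mean_err) ** 2))
```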
Wouldn't it be possible to just use a constant weight for all depth points? I think the Redwood dataset used in the paper does not provide confidence/error values either.
Unfortunately, the code in this repo seems to be incomplete and different from the code used in the paper, since the function load_sensor_depth() in load_llff.py is almost the same as load_colmap_depth().
Any update on how to use sensor depth?
Also, I am using NDC space and have depth values collected from a sensor. Should I normalize these values between 0 and 1, or should I convert them into NDC space?
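If I read the NeRF NDC derivation correctly, plain [0, 1] normalization is not what you want; the NDC ray parameter relates to metric depth through the near plane. A hedged sketch, assuming a forward-facing camera and rays shifted to the near plane as in NeRF's `ndc_rays`:

```python
def depth_to_ndc_t(depth, near):
    # NeRF maps camera-space z to NDC via z_ndc = 1 + 2*near/z (z < 0);
    # rescaled from [-1, 1] to the [0, 1] ray parameter, this becomes
    # t = 1 - near/depth, with depth measured along the camera z-axis.
    # Assumes depth >= near > 0; invalid/zero depths must be masked first.
    return 1.0 - near / depth
```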
I just want to test whether the Blender lego depth files work with this framework, using only 2 views for better novel view generation!
@dunbar12138
Hi guys, is there any progress on raw sensor depth as input? An immature idea is to convert the poses and depth images into the COLMAP bin file format with a script, but I think this is superfluous. Rewriting load_sensor_depth may be better, though the changes involved may be larger; see the sketch below.
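To make the second option concrete, here is a minimal sketch of such a loader. The file layout (one `.npy` metric depth map per view) and the per-image dict keys are my assumptions, chosen to mirror the sparse depth/coord/weight outputs that `load_colmap_depth` appears to produce:

```python
import glob
import os
import numpy as np

def load_my_sensor_depth(depth_dir, num_samples=512):
    """Hypothetical sensor-depth loader, not the repo's API.

    Expects one (H, W) .npy depth map per training view, in metres,
    with 0 marking invalid pixels, and returns per-image dicts of
    sparse samples: pixel coords, depths, and constant weights.
    """
    data_list = []
    for path in sorted(glob.glob(os.path.join(depth_dir, "*.npy"))):
        depth_map = np.load(path)
        ys, xs = np.nonzero(depth_map > 0)          # valid pixels only
        pick = np.random.choice(
            len(xs), size=min(num_samples, len(xs)), replace=False)
        coord = np.stack([xs[pick], ys[pick]], axis=-1).astype(np.float32)
        depth = depth_map[ys[pick], xs[pick]].astype(np.float32)
        weight = np.ones_like(depth)                # constant confidence
        data_list.append({"depth": depth, "coord": coord, "weight": weight})
    return data_list
```

The constant weights match the earlier suggestion in this thread; if your sensor reports per-pixel confidence, you could plug that in instead.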