How to render the depth map from SA3D?
You may need to change the code in lib/render_utils.py. There we convert the depth to a colormap for visualization; you can save the raw depth directly instead of converting it.
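For example, a minimal sketch of saving the raw depth, assuming the render loop yields a per-view depth tensor; the function, variable, and directory names below are illustrative, not the repo's exact API:

```python
import os
import numpy as np

def save_raw_depth(depth, out_dir, idx):
    """Save a rendered depth map losslessly as .npy instead of a colormapped PNG."""
    os.makedirs(out_dir, exist_ok=True)
    # Handle both torch tensors and numpy arrays.
    depth_np = depth.detach().cpu().numpy() if hasattr(depth, "detach") else np.asarray(depth)
    np.save(os.path.join(out_dir, f"depth_{idx:03d}.npy"), depth_np)
```

Loading the .npy later gives you the original depth values back, with no colormap quantization.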
What do you mean by 'aligned'? Theoretically they should be. However, the depth is estimated by NeRF, so it may not match the actual scene geometry, but it should be aligned with the rendered RGB.
3DGS can export a depth map. However, 3DGS is represented by a set of 3D Gaussians, which can be regarded as a kind of point cloud, so you can export the Gaussian centers directly as a point cloud instead of estimating one from the depth map.
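As an illustration, a sketch of dumping the Gaussian centers as an ASCII PLY point cloud; `get_xyz` follows the attribute naming of the official 3DGS codebase, so substitute your own (N, 3) centers tensor if your checkpoint differs:

```python
import numpy as np

def export_gaussians_as_ply(xyz, path):
    """Write an (N, 3) array of Gaussian centers as an ASCII PLY point cloud."""
    xyz = np.asarray(xyz, dtype=np.float32)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(xyz)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in xyz:
            f.write(f"{x} {y} {z}\n")

# e.g. export_gaussians_as_ply(gaussians.get_xyz.detach().cpu().numpy(),
#                              "gaussian_points.ply")
```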
Aligned: each pixel in the RGB image corresponds one-to-one to a pixel in the depth map. If they are aligned, I can detect the object in the RGB image and combine it with the depth to get a 3D model.
Yes, they are.
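Given that pixel-level alignment, a 2D detection mask can be lifted to 3D by pinhole backprojection. A minimal sketch, assuming the intrinsics (fx, fy, cx, cy) of the rendering camera are known:

```python
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Lift masked pixels of an (H, W) depth map to camera-space 3D points."""
    v, u = np.nonzero(mask)        # pixel coordinates inside the detection
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)   # (N, 3) points in the camera frame
```

The resulting points are in the camera frame; multiply by the camera-to-world pose to place them in world coordinates.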
Another question: I want to customize the rendering viewpoints. Which code should I modify? I only need to define four angles.
To define the rendering views manually you need to modify the camera poses, which is not an easy task. You can find where the camera matrices are used here.
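For orientation, here is a sketch of building camera-to-world poses from four azimuth angles in the pose_spherical style common to NeRF-family codebases. Whether SA3D's renderer accepts poses built exactly this way is an assumption to verify against its data loading code, and the elevation and radius values are illustrative:

```python
import numpy as np

def pose_spherical(theta, phi, radius):
    """Camera-to-world 4x4 from azimuth theta (deg), elevation phi (deg), distance radius."""
    trans = np.eye(4)
    trans[2, 3] = radius                       # translate along +z by the camera distance
    p, t = np.deg2rad(phi), np.deg2rad(theta)
    rot_phi = np.eye(4)                        # rotate around the x-axis (elevation)
    rot_phi[1:3, 1:3] = [[np.cos(p), -np.sin(p)], [np.sin(p), np.cos(p)]]
    rot_theta = np.eye(4)                      # rotate around the y-axis (azimuth)
    rot_theta[0, 0], rot_theta[0, 2] = np.cos(t), -np.sin(t)
    rot_theta[2, 0], rot_theta[2, 2] = np.sin(t), np.cos(t)
    c2w = rot_theta @ rot_phi @ trans
    # Flip axes to match the OpenGL/NeRF camera convention.
    return np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) @ c2w

# Four views, 90 degrees apart, at -30 degrees elevation and distance 4.0.
render_poses = [pose_spherical(t, -30.0, 4.0) for t in (0.0, 90.0, 180.0, 270.0)]
```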
The NeRF you use is TensoRF. How can I switch it to other models?
It is hard to change in this codebase, but I have integrated it with 3D-GS. The SA3D-GS branch currently has some bugs; you can refer to this repo for a fixed version.
I found that after running run.py, there are 120 rendered images. Where are the camera poses (rotation and translation matrices) for these images saved?
We did not save these poses, but you can refer to the generation code here.
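If you want them on disk, a sketch that regenerates a 120-view sweep (matching the 120 rendered images) and saves the 4x4 camera-to-world matrices. It reuses the pose_spherical helper from the sketch above; the -30 elevation and 4.0 radius are again illustrative defaults you should check against the repo's generation code:

```python
import numpy as np

angles = np.linspace(-180.0, 180.0, 120, endpoint=False)           # 120 azimuth steps
poses = np.stack([pose_spherical(a, -30.0, 4.0) for a in angles])  # (120, 4, 4)
np.save("render_poses.npy", poses)  # pose[:3, :3] is rotation, pose[:3, 3] is translation
```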