@szhang963 I changed the `render_image` function in models.py:

```python
def render_image(render_fn, rays, rank, chunk=8192):
    is_semantic = render_fn.func.module.semantic
    n_devices = torch.cuda.device_count()  # one device for render
    height, width = rays[0].shape[:2]
    num_rays...
```
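For reference, here is a minimal sketch of the chunked-rendering pattern the (truncated) snippet above appears to follow. The `render_fn` call signature, the `(H, W, ...)` ray layout, and the dict-of-tensors return type are all assumptions for illustration, not the repository's actual API:

```python
import torch

def render_image_chunked(render_fn, rays, chunk=8192):
    """Render a full image by splitting rays into chunks to bound GPU memory.

    Hypothetical sketch: assumes `rays` is a list/tuple of tensors shaped
    (H, W, C) and `render_fn` returns a dict of per-ray tensors.
    """
    height, width = rays[0].shape[:2]
    # Flatten the spatial dimensions so rays can be sliced uniformly.
    flat_rays = [r.reshape(-1, *r.shape[2:]) for r in rays]
    num_rays = flat_rays[0].shape[0]

    outputs = []
    for start in range(0, num_rays, chunk):
        chunk_rays = [r[start:start + chunk] for r in flat_rays]
        with torch.no_grad():  # rendering only, no gradients needed
            outputs.append(render_fn(chunk_rays))

    # Concatenate per-chunk results and restore the (H, W, ...) layout.
    return {
        k: torch.cat([o[k] for o in outputs], dim=0).reshape(height, width, -1)
        for k in outputs[0].keys()
    }
```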
@ZiYang-xie Hi, could you please spare some time to assist me with this matter? I would greatly appreciate your help.
@amoghskanda No, the rendering problem needs to be solved first.
What causes it? Inconsistent intrinsic parameters across the multiple cameras?
@yunzhiy The multi-camera result is excellent. I'm looking forward to the release of the code.
@Sugar55888 Hi, did you try depth supervision from lidar points?
@Sugar55888 Hi, thanks for your reply. Could you please tell me how to add lidar depth supervision in splatfacto for street gaussians? I use depth supervision (an L1 loss) between...
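A minimal sketch of the kind of masked L1 depth loss described above (the function name, tensor shapes, and the convention that missing lidar returns are encoded as 0 are assumptions, not splatfacto's actual API):

```python
import torch

def lidar_depth_l1_loss(rendered_depth: torch.Tensor,
                        lidar_depth: torch.Tensor,
                        weight: float = 0.1) -> torch.Tensor:
    """Masked L1 loss between rendered depth and sparse lidar depth.

    Hypothetical sketch: assumes both tensors are (H, W) and that pixels
    without a lidar return hold the value 0 in `lidar_depth`.
    """
    valid = lidar_depth > 0  # supervise only pixels hit by a lidar beam
    if valid.sum() == 0:
        return rendered_depth.new_zeros(())
    return weight * torch.abs(rendered_depth[valid] - lidar_depth[valid]).mean()
```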
@altaykacan Thanks for your help. I reconstruct the lidar point cloud, and from that I get merged lidar points and per-frame poses without using COLMAP. Therefore, the depth map is...
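A minimal sketch of producing such a depth map by projecting merged lidar points into a camera (the pinhole intrinsics `K`, the world-to-camera pose layout, and the function name are assumptions about the data format):

```python
import numpy as np

def lidar_to_depth_map(points_world: np.ndarray,    # (N, 3) merged lidar points
                       T_world_to_cam: np.ndarray,  # (4, 4) per-frame pose
                       K: np.ndarray,               # (3, 3) pinhole intrinsics
                       height: int, width: int) -> np.ndarray:
    """Project world-frame lidar points into a camera to get a sparse depth map.

    Hypothetical sketch; returns 0 at pixels with no lidar return.
    """
    # Transform points into the camera frame.
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    pts_cam = (T_world_to_cam @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    z = pts_cam[:, 2]
    front = z > 1e-6
    pts_cam, z = pts_cam[front], z[front]

    # Pinhole projection to integer pixel coordinates.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], z[inside]

    # Keep the nearest return per pixel (write far-to-near so near overwrites far).
    depth = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```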
> This is because your world coordinate is not aligned with the road surface.
>
> In Waymo's data process, we use the vehicle coordinate (red circle in the figure)...
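A minimal sketch of one way to do that alignment: estimate the ground-plane normal (e.g., by fitting a plane to road-surface lidar points) and rotate the world so the normal maps onto +z. The function name and the +z-up convention are assumptions, not the repository's actual pipeline:

```python
import numpy as np

def align_world_to_ground(ground_normal: np.ndarray) -> np.ndarray:
    """Rotation matrix mapping an estimated ground-plane normal onto world +z.

    Hypothetical sketch: applying the returned rotation to all points and
    poses puts the road surface into the z = 0 plane.
    """
    n = ground_normal / np.linalg.norm(ground_normal)
    z = np.array([0.0, 0.0, 1.0])
    c = float(np.dot(n, z))             # cosine of the angle between n and +z
    if np.isclose(c, 1.0):              # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):             # antiparallel: rotate 180 deg about x
        return np.diag([1.0, -1.0, -1.0])
    v = np.cross(n, z)                  # rotation axis, with norm sin(theta)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' formula, with (1 - cos) / sin^2 simplified to 1 / (1 + cos).
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
```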
Hi, I have aligned the ground with the world coordinate system. However, I have a new question about rendering depth for 3D assets. It causes the asset not to...