Blocky / incomplete raycast output
Checklist
- [X] I have searched for similar issues.
- [X] For Python issues, I have tested with the latest development wheel.
- [X] I have checked the release documentation and the latest documentation (for `master` branch).
Describe the issue
I'm working on code derived from the new SLAM reconstruction example, and so far it works very well and is very fast, with the exception of situations in which the raycasting used to generate synthetic frames produces bad output. This always seems to be the case for larger voxel sizes -- say, 0.010 -- but I have seen it even at a 0.002 voxel size.
.01 output
The artifacting gets worse the larger the voxel size is.
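For context on why the artifacting might scale with voxel size: as I understand it, the effective TSDF truncation band that the raycaster marches through is `voxel_size * trunc_voxel_multiplier`, so a coarser grid gets a much wider band. A quick arithmetic sketch (the multiplier of 8 matches the default used later in this report; the relationship itself is my assumption, not something I've confirmed in the source):

```python
# Sketch: effective TSDF truncation distance as a function of voxel size.
# Assumption: trunc = voxel_size * trunc_voxel_multiplier (default 8).
# These are just arithmetic illustrations, not measurements.
def truncation_distance(voxel_size, trunc_voxel_multiplier=8.0):
    return voxel_size * trunc_voxel_multiplier

for voxel_size in (0.010, 0.003, 0.002):
    trunc = truncation_distance(voxel_size)
    print(f"voxel {voxel_size:.3f} m -> truncation {trunc * 100:.1f} cm")
```

At 0.010 the band is roughly 8 cm wide versus about 1.6 cm at 0.002, which at least correlates with where the splotches are worst.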
Raycast:
Pointcloud:
Frustum (plotted voxels from -.5 to .5 TSDF; same frustum given to raycast)
Raycast superimposed on PC:
Raycast superimposed over frustum
Note that my tooling regenerates everything to turn on frustum visibility; each call to raycast produces different splotchy output, seemingly at random.
.003 output
The same artifacts appear at .003 on the same dataset.
Things I've tried:
- Changing `weight_threshold`, `trunc_voxel_multiplier`, and `range_map_down_factor` on the raycast call. No setting there changes this behavior.
- Controlling `trunc_voxel_multiplier` for frustum generation independently. This didn't do anything either.
- Running raycast on CPU like so:

```python
hashmap: o3d.core.HashMap = voxel_grid.hashmap()
hashmap.cpu()
raycast_frame = voxel_grid.ray_cast(
    block_coords=frustum_block_coords_t,
    intrinsic=self.intrinsics,
    extrinsic=extrinsics,
    width=self.input_frame.width(),
    height=self.input_frame.height(),
    render_attributes=attrs,
    depth_scale=1000.0,
    depth_min=config.depth_min,
    depth_max=config.depth_max,
    weight_threshold=weight_thresh,
    trunc_voxel_multiplier=config.trunc_voxel_render,
    range_map_down_factor=8)
hashmap.cuda()
```
This didn't change the behavior, which implies it's not a GPU issue, although it is hard to verify externally whether that is truly running on the CPU or not.
Steps to reproduce the bug
I've seen this issue with a custom dataset built following the instructions here: http://www.open3d.org/html/tutorial/t_reconstruction_system/index.html. Currently I'm running more tailored Python code than that. Here is the npz of the voxel grid from the screenshots -- https://drive.google.com/file/d/1HGRqOTGA2bYpCVzTQFKZKI9hjTOhTG6X/view?usp=sharing -- and I'd expect raycasting it to show the same artifacts.
Error message
No response
Expected behavior
I'd expect the raycast to mostly look like the generated pointcloud from the given perspective, assuming the generation inputs -- weight threshold, voxel grid, etc. -- are the same.
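To make "mostly look like" measurable, a quick sanity metric I'd use is the fraction of invalid (near-zero) pixels in the synthesized depth; on a good frame it should be close to the fraction of the image the point cloud leaves uncovered, while the splotchy frames have far more holes. Pure-NumPy sketch (the threshold and the metric are my own, not Open3D API):

```python
import numpy as np


def invalid_fraction(depth, depth_min=0.1):
    """Fraction of pixels with no valid synthesized depth (< depth_min m)."""
    depth = np.asarray(depth, dtype=np.float32)
    return float((depth < depth_min).mean())


# Tiny demo: half the pixels are holes.
demo = np.array([[0.0, 1.2],
                 [0.0, 2.5]], dtype=np.float32)
print(invalid_fraction(demo))  # 0.5
```

The depth array would come from something like `raycast_result["depth"].numpy()` scaled back to meters.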
Open3D, Python and System information
- Operating system: Ubuntu 20.04
- Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
- Open3D version: 0.15.2 and 0.15.2+99c361c
- System architecture: x86
- Is this a remote workstation?: no, but it is running in Docker
- How did you install Open3D?: pip
Additional information
No response
To make it easier to test:
```cpp
#include "open3d/Open3D.h"
#include "open3d/t/geometry/Image.h"

using namespace open3d;
using core::Tensor;
using t::geometry::PointCloud;
using t::geometry::RGBDImage;

int main(int argc, char *argv[]) {
    auto voxel_grid = t::geometry::VoxelBlockGrid::Load(argv[1]);
    auto intrinsic_t = Tensor::Init<double>({{1537.13, 0.0, 962.311},
                                             {0, 1537.66, 585.374},
                                             {0, 0, 1}});
    auto extrinsics_t = Tensor::Init<double>(
            {{-0.888227, -0.446662, 0.107447, 0.00176051},
             {-0.238145, 0.647672, 0.723746, 0.0136757},
             {-0.39286, 0.617263, -0.68165, 0.865006},
             {0, 0, 0, 1}});
    auto input_depth_tensor = Tensor::Load(argv[2]);

    core::Device device("CUDA:0");
    auto empty_depth =
            std::make_shared<open3d::t::geometry::Image>(input_depth_tensor);
    auto block_coords = voxel_grid.GetUniqueBlockCoordinates(
            empty_depth->To(device), intrinsic_t, extrinsics_t);

    std::vector<std::string> attrs = {"depth", "color"};
    auto rgbd_map = voxel_grid.RayCast(block_coords, intrinsic_t, extrinsics_t,
                                       1920, 1080, attrs, 1000.0, .1, 3.0,
                                       0, 8, 8);

    auto depth = open3d::t::geometry::Image(rgbd_map["depth"]).ToLegacy();
    auto color = open3d::t::geometry::Image(rgbd_map["color"]).ToLegacy();
    auto rgbd = geometry::RGBDImage::CreateFromColorAndDepth(color, depth,
                                                             1000, 3, true);
    visualization::DrawGeometries({rgbd}, "RGBD", 1920 * 2, 1080);
}
```
Ran with the file paths as args for https://drive.google.com/file/d/1HGRqOTGA2bYpCVzTQFKZKI9hjTOhTG6X/view?usp=sharing and https://drive.google.com/file/d/1xDjLWyWqV2IJpPUxOtZLXlSZohXMRBaG/view?usp=sharing respectively, this produces:
Since I had it in C++ now, I hacked it a bit to make sure it was running on CPU, and confirmed the Python attempt was not. But when running (both frustum and raycast) on CPU I get:
which is likely good enough for tracking, although it does have a few obvious blocks missing.
glxinfo snippet:

```
Memory info (GL_NVX_gpu_memory_info):
    Dedicated video memory: 16384 MB
    Total available memory: 16384 MB
    Currently available dedicated video memory: 10897 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce RTX 3080 Laptop GPU/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 510.73.05
OpenGL core profile shading language version string: 4.60 NVIDIA
```
Hi, I'm facing a similar issue. Is there any solution? @jdavidberger
@nfrankisrg Hi, I'm facing a similar issue with my own data. Have you solved your problem?
The fix for this was merged a while back. If you are running a recent version of Open3D, it might be a new bug/issue.