Transformation matrices for multiple camera views
Hi,
I am using Open3D to generate point clouds from RGB-D image observations. This works for a single viewpoint, but when I try to visualize multiple viewpoints together, the point cloud from each viewpoint appears oddly flat. I tried using ICP to align the point clouds, but they remain misaligned despite hyperparameter tuning. (See the attached visuals for how the point clouds evolve as more viewpoints and ICP are added.)
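For reference, this is roughly how I build the point cloud for a single view. A minimal sketch, assuming the helpers in `robosuite.utils.camera_utils` (names as I understand them in my robosuite version) and a standard Open3D back-projection; `sim`, `obs`, and the camera/resolution arguments are placeholders from my setup:

```python
import numpy as np
import open3d as o3d
from robosuite.utils.camera_utils import (
    get_camera_intrinsic_matrix,
    get_real_depth_map,
)

def pointcloud_from_view(sim, obs, camera_name, height, width):
    # RGB and depth observations for this camera; MuJoCo renders with the
    # origin at the bottom-left, so I flip both images vertically.
    rgb = np.flipud(obs[f"{camera_name}_image"]).copy()
    depth = np.flipud(obs[f"{camera_name}_depth"]).copy()

    # MuJoCo's depth buffer is normalized to [0, 1]; convert it to metric
    # depth before back-projecting (the raw buffer gives distorted geometry).
    depth = get_real_depth_map(sim, depth).squeeze().astype(np.float32)

    K = get_camera_intrinsic_matrix(sim, camera_name, height, width)
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        width, height, K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    )

    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(np.ascontiguousarray(rgb)),
        o3d.geometry.Image(depth),
        depth_scale=1.0,   # depth is already in meters after conversion
        depth_trunc=3.0,   # truncate far-away returns
        convert_rgb_to_intensity=False,
    )
    return o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
```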
Given the flat multi-view visualizations, I suspect one of the transformation matrices is being applied incorrectly. Could you clarify whether the camera extrinsic matrix maps from camera coordinates to world coordinates? My goal is to transform each individual camera view into world coordinates and combine the point clouds there, as sketched below.
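This is how I am merging the views, under the assumption (which I would like to confirm) that `get_camera_extrinsic_matrix` returns the camera-to-world transform:

```python
from robosuite.utils.camera_utils import get_camera_extrinsic_matrix

def merged_pointcloud(sim, obs, camera_names, height, width):
    merged = o3d.geometry.PointCloud()
    for name in camera_names:
        pcd = pointcloud_from_view(sim, obs, name, height, width)
        # If the extrinsic is camera-to-world, applying it should place
        # every view in the shared world frame with no ICP needed.
        T_world_cam = get_camera_extrinsic_matrix(sim, name)
        pcd.transform(T_world_cam)
        merged += pcd
    return merged

# Example usage (camera names and resolution are from my own config):
# merged = merged_pointcloud(env.sim, obs, ["agentview", "frontview"], 256, 256)
# o3d.visualization.draw_geometries([merged])
```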
Thank you.