
[Question] Visualize `Camera` sensor pointcloud on the viewport

abizovnuralem opened this issue 1 year ago • 2 comments

Question

Hello dear Orbit dev team! I was trying to set up a PinholeCameraCfg instead of a GridPatternCfg, using this demo as a base for my experiments: ./orbit.sh -p source/standalone/tutorials/04_sensors/add_sensors_on_robot.py. After adding it alongside the other sensors, I only get infinite values and cannot see any red dots in the sim. Can you please advise how to get values from it? Thanks!

The config:

# sensors

camera = CameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base/front_cam",
    update_period=0.1,
    height=480,
    width=640,
    data_types=["rgb", "distance_to_image_plane"],
    spawn=sim_utils.PinholeCameraCfg(
        focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 1.0e5)
    ),
    offset=CameraCfg.OffsetCfg(pos=(0.510, 0.0, 0.015), rot=(0.5, -0.5, 0.5, -0.5), convention="ros"),
)
height_scanner = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    update_period=0.02,
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
    attach_yaw_only=True,
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0], direction=[0.0, 0.0, -1.0]),
    debug_vis=True,
    mesh_prim_paths=["/World/defaultGroundPlane"],
)

lidar = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCfg.OffsetCfg(pos=(10.0, 0.0, 20.0)),
    pattern_cfg=patterns.BpearlPatternCfg(horizontal_fov=360.0, horizontal_res=10.0),
    debug_vis=True,
    mesh_prim_paths=["/World/defaultGroundPlane"],
)


contact_forces = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT", update_period=0.0, history_length=6, debug_vis=True
)
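For context on what "getting values" from the camera would involve: unlike the ray caster, whose `ray_hits_w` are already 3D points, the camera's `distance_to_image_plane` output is a z-depth image, so it has to be unprojected through the pinhole intrinsics before it looks like a pointcloud. A minimal sketch in plain PyTorch, assuming square pixels and deriving the intrinsics from the `focal_length`/`horizontal_aperture` values in the config above (the function name is illustrative, not an Isaac Lab API):

```python
import torch

def depth_to_pointcloud(depth: torch.Tensor, focal_length: float,
                        horizontal_aperture: float) -> torch.Tensor:
    """Unproject an (H, W) z-depth image into an (N, 3) pointcloud in the
    camera's optical frame (x right, y down, z forward)."""
    height, width = depth.shape
    # Pinhole intrinsics: fx = W * f / aperture; assume square pixels.
    fx = width * focal_length / horizontal_aperture
    fy = fx
    cx, cy = width / 2.0, height / 2.0
    v, u = torch.meshgrid(
        torch.arange(height, dtype=depth.dtype),
        torch.arange(width, dtype=depth.dtype),
        indexing="ij",
    )
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = torch.stack((x.flatten(), y.flatten(), depth.flatten()), dim=-1)
    # Drop pixels whose rays hit nothing (inf depth from the clipping range).
    return points[torch.isfinite(points).all(dim=-1)]

# Example: a flat wall 2 m in front of the camera.
depth = torch.full((480, 640), 2.0)
pts = depth_to_pointcloud(depth, focal_length=24.0, horizontal_aperture=20.955)
print(pts.shape)  # torch.Size([307200, 3]); every z value is 2.0
```

The resulting points are still in the camera frame; to compare them with `ray_hits_w` they would additionally need to be transformed by the camera's world pose.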

How I read the values:

# print information from the sensors

    print("-------------------------------")
    print(scene["camera"])
    print("Received shape of rgb   image: ", scene["camera"].data.output["rgb"].shape)
    print("Received shape of depth image: ", scene["camera"].data.output["distance_to_image_plane"].shape)
    print("-------------------------------")
    print(scene["height_scanner"])
    print("Received max height value: ", torch.max(scene["height_scanner"].data.ray_hits_w[..., -1]).item())
    print("-------------------------------")
    print(scene["contact_forces"])
    print("Received max contact force of: ", torch.max(scene["contact_forces"].data.net_forces_w).item())
    print("-------------------------------")
    print(scene["lidar"])
    print("Received max lidar value: ", torch.max(scene["lidar"].data.ray_hits_w[..., -1]).item())
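One note on the "infinite values" in the printouts above: pixels whose rays never hit geometry report an infinite distance, so `torch.max` over the raw depth output is `inf` even when the camera works correctly. A small sketch of masking out non-finite values before reducing (the tensor here is just stand-in data):

```python
import torch

# Stand-in for scene["camera"].data.output["distance_to_image_plane"]:
# two pixels hit geometry, one ray escapes to infinity.
depth = torch.tensor([[1.5, float("inf")], [0.8, 2.3]])

finite = depth[torch.isfinite(depth)]
if finite.numel() > 0:
    print("Received max finite depth:", finite.max().item())
else:
    print("No geometry in view")
```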

abizovnuralem avatar Apr 15 '24 08:04 abizovnuralem

@pascal-roth Can you help out here?

Mayankm96 avatar Apr 19 '24 07:04 Mayankm96

I revisited this issue. We don't have any debug visualization for the camera sensor, which is why you don't see any pointcloud for it in the viewport. It should be easy to add one based on the ray-caster camera's implementation. We'd be happy to accept an MR for this.
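Until such a debug visualization exists, one can mimic what the ray caster's `debug_vis` effectively does by hand: transform the camera-frame points into the world frame and hand a subsample of them to a marker visualizer each step. A sketch of just the transform-and-subsample step in plain PyTorch (all names are illustrative, the quaternion is assumed to be in (w, x, y, z) order, and the marker API itself is omitted):

```python
import torch

def prepare_debug_points(points_cam: torch.Tensor,
                         cam_pos_w: torch.Tensor,
                         cam_quat_w: torch.Tensor,
                         max_points: int = 2000) -> torch.Tensor:
    """Transform camera-frame points into the world frame and subsample them
    so the viewport is not flooded with one marker per pixel."""
    # Quaternion (w, x, y, z) -> rotation matrix.
    w, x, y, z = cam_quat_w
    rot = torch.tensor([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ], dtype=points_cam.dtype)
    points_w = points_cam @ rot.T + cam_pos_w
    # Random subsample to keep the marker count (and frame rate) manageable.
    if points_w.shape[0] > max_points:
        idx = torch.randperm(points_w.shape[0])[:max_points]
        points_w = points_w[idx]
    return points_w

cloud = torch.rand(10000, 3)  # stand-in for an unprojected depth image
pts_w = prepare_debug_points(cloud,
                             cam_pos_w=torch.tensor([0.51, 0.0, 0.015]),
                             cam_quat_w=torch.tensor([1.0, 0.0, 0.0, 0.0]))
print(pts_w.shape)  # torch.Size([2000, 3])
```

The returned `(N, 3)` tensor is what a marker visualizer would consume as per-marker translations.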

Mayankm96 avatar Jul 29 '24 10:07 Mayankm96