
Replay demonstration using the raytracing renderer

Xingyu-Lin opened this issue 1 year ago • 6 comments

Hi,

Thank you for creating this great set of tasks! Are there examples that use the Kuafu renderer for replaying the trajectories? Naively replacing the renderer in sapien_env.py results in a segmentation fault:

[2022-08-31 14:43:21.660] [kuafu] [warning] Denoiser ON! You must have an NVIDIA GPU with driver version > 470 installed.
[2022-08-31 14:43:22.414] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.414] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.430] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.442] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.442] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.442] [kuafu] [warning] Mesh not triangulated!
Replaying traj_0: : 0step [00:00, ?step/s, control_mode=pd_joint_pos, obs_mode=none][2022-08-31 14:43:22.673] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.673] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.688] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.698] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.698] [kuafu] [warning] Mesh not triangulated!
[2022-08-31 14:43:22.698] [kuafu] [warning] Mesh not triangulated!
/home/xingyu/software/miniconda3/envs/mani_skill2/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
Segmentation fault (core dumped)

Xingyu-Lin · Aug 31 '22 21:08

We will take a look at this issue. Please provide us with more details:

  • The environment you are using
  • The command you used to replay the trajectory
  • Your hardware and system information (e.g., GPU, OS)

Jiayuan-Gu · Aug 31 '22 21:08

I am able to run examples/demo_manual_control.py with the Kuafu renderer enabled. Note that sapien.Viewer cannot be used directly when the Kuafu renderer is active.

You can modify BaseEnv in mani_skill2/envs/sapien_env.py:

    def __init__(
        self,
        obs_mode=None,
        reward_mode=None,
        control_mode=None,
        sim_freq: int = 500,
        control_freq: int = 20,
        device: str = "",
        enable_shadow=False,
        enable_gt_seg=False,
    ):
        # SAPIEN
        self._engine = sapien.Engine()
        # sapien.VulkanRenderer.set_log_level("off")
        # self._renderer = sapien.VulkanRenderer(default_mipmap_levels=1, device=device)
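        # Swap in the Kuafu ray-tracing renderer in place of the Vulkan rasterizer above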
        config = sapien.KuafuConfig()
        self._renderer = sapien.KuafuRenderer(config)
        self._engine.set_renderer(self._renderer)
        self._viewer = None

and temporarily disable depth observation:

    def _get_obs_images(
        self, rgb=True, depth=False, visual_seg=False, actor_seg=False
    ) -> OrderedDict:
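
A minimal smoke test, assuming the two edits above have been applied to your local ManiSkill2 checkout. The environment id and control mode are taken from this thread; the obs_mode value and the exact observation layout are assumptions that may differ across versions:

import gym
import mani_skill2.envs  # noqa: F401  (registers the ManiSkill2 environments)

# With the patched BaseEnv, RGB images come from the Kuafu ray-tracing renderer;
# depth is disabled by the _get_obs_images change above.
env = gym.make("TurnFaucet-v0", obs_mode="rgbd", control_mode="pd_joint_pos")
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()

As noted above, do not open the interactive viewer in this setup, since sapien.Viewer cannot be used together with the Kuafu renderer.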

Jiayuan-Gu · Aug 31 '22 23:08

Thanks for the quick response. I am running Ubuntu 20 with a GTX 1080 Ti GPU. The command I ran is:

python tools/replay_trajectory.py --traj-path demos/rigid_body_envs/TurnFaucet-v0/5007.h5 --save-video

Question: does the Kuafu renderer also support depth rendering?

Xingyu-Lin · Aug 31 '22 23:08

Update: I am able to run the command with the Kuafu renderer after disabling depth rendering. It would still be good to know whether there is a way to keep depth rendering with Kuafu.

Xingyu-Lin · Sep 01 '22 00:09

Hi @Xingyu-Lin, do you mind explaining your use case? I wonder whether you need depth from active light or rasterization.

Jiayuan-Gu · Sep 01 '22 05:09

I would like to use the raytracing renderer to test the perception system with realistic RGB. At the same time, I would like to get reasonable depth readings to obtain point clouds.

One option I see is to get the depth from a rasterization renderer and separately get the RGB from the raytracing renderer. However, right now the default camera setups seem to differ, so the RGB and the depth cannot be aligned. Furthermore, this option would be slow. As such, it would be great to get depth from the raytracing renderer as well.

The active-light depth from the link you posted looks a bit strange. Is that because the objects are reflective?

Xingyu-Lin · Sep 01 '22 17:09

Overall, the Kuafu renderer currently supports ray-traced RGB and simulated depth with active light. We are investigating adding rasterized depth. Also note that soft-body environments do not seem to support Kuafu at the moment.

One option I see is to get the depth from a rasterization renderer and separately get the RGB from the raytracing renderer.

The workaround you suggest should work, and the current camera setup should guarantee that the RGB and depth are aligned. If you only use the data offline (perception, offline RL) rather than online (RL), replaying trajectories twice with different renderers is a good solution.
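
A rough sketch of that offline pairing step (not part of ManiSkill2): assuming frame i of the ray-traced replay and frame i of the rasterized replay come from the same camera, the rasterized depth can be back-projected into a colored point cloud using the camera intrinsics. The helper name, K, and the frame arrays are placeholders:

import numpy as np

def depth_to_pointcloud(depth, rgb, K):
    """Back-project a depth map (H, W) into camera-frame points with per-point colors."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates, shape (H, W)
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0  # drop pixels without a depth reading
    return points[valid], colors[valid]

Since both replays run the same trajectory with the same camera configuration, pairing frames by index keeps the RGB and depth aligned.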

The depth of the active light from the link you posted seems a bit strange - Is that because the objects are reflective?

It uses active light to simulate realistic depth (especially the noise patterns), but the correct materials may need to be set. If you want to work with realistic perception, it might be the better choice.

Jiayuan-Gu · Sep 01 '22 17:09