Xuanlin (Simon) Li
Same issue here with v0.5.4 (Nov 2, 2022)
> For the first run, the Success or Early Stop Rate is 0.81; for the second run, the result is 0.84. It seems that the generated seed (using the following code)...
Hi, I'd recommend using [ManiSkill2-Learn](https://github.com/haosulab/ManiSkill2-Learn) for ManiSkill2 experiments; the wrappers have been updated to accommodate ManiSkill2. As for your question, the action space is
```
>>> env = gym.make('PickCube-v0', control_mode='pd_ee_delta_pose')
>>> env.agent.controller.controllers...
```
- 3d delta position = normalized displacement of the joint in 3D space; normalization is set in the controller config, default (-1, 1) -> (-0.1 m, 0.1 m)
- 3d delta axis-angle rotation = delta rotation in axis-angle format...
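A minimal runnable sketch tying this to the snippet above, assuming the older gym API that ManiSkill2 originally used and the default Panda setup, where the `pd_ee_delta_pose` action is 7-dimensional (3 delta position, 3 delta axis-angle rotation, 1 gripper); the specific values are purely illustrative:

```python
import gym
import numpy as np
import mani_skill2.envs  # registers the ManiSkill2 environments with gym

env = gym.make("PickCube-v0", control_mode="pd_ee_delta_pose")
print(env.action_space)  # expected: a normalized Box in [-1, 1]

obs = env.reset()  # older gym API; newer gymnasium-based versions return (obs, info)
action = np.zeros(env.action_space.shape, dtype=np.float32)
action[0] = 0.5    # normalized x displacement; ~0.05 m under the default (-1,1)->(-0.1m,0.1m) mapping
action[-1] = 1.0   # gripper command (assumed to be the last action dimension)
step_result = env.step(action)  # 4-tuple under old gym, 5-tuple under gymnasium
env.close()
```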
Sorry for the late reply. We don't have plans to release the held-out evaluation set yet; it is currently only used for the MS2 challenge.
> whether it is more efficient to train and simulate on 5 GPUs, or whether it is more efficient to separate training and simulation so that I train on 2...
Yes, you need `TStepTransition` and `horizon=-1` to sample full trajectories. For dynamic loading, you need to set a capacity > 0. The same logic should still hold, though if I remember...
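If it helps, here is a rough sketch of what such a replay setup could look like in a ManiSkill2-Learn-style config. Apart from `TStepTransition`, `horizon=-1`, and a positive capacity (which come from the reply above), the key names are assumptions and should be checked against the configs shipped with the repo:

```python
# Hypothetical replay config sketch -- key names other than the ones discussed
# above are assumptions; verify against the configs in ManiSkill2-Learn.
replay_cfg = dict(
    type="ReplayMemory",
    capacity=20000,          # > 0 so that dynamic loading can work
    dynamic_loading=True,    # stream trajectories from disk instead of loading everything
    sampling_cfg=dict(
        type="TStepTransition",
        horizon=-1,          # -1 = sample full trajectories
    ),
)
```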
The environment settings are the same as in the online DAPG experiments (i.e., for PickCube-v0, `env_cfg.control_mode=pd_ee_delta_pose`, `env_cfg.obs_frame=ee`, `env_cfg.n_goal_points=50`).
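Spelled out as a config fragment (just a restatement of the overrides above in dict form; surrounding key names such as `env_name` are assumptions):

```python
# Sketch of the same overrides as a config dict (illustrative only).
env_cfg = dict(
    env_name="PickCube-v0",
    control_mode="pd_ee_delta_pose",
    obs_frame="ee",       # express observations in the end-effector frame
    n_goal_points=50,     # goal points appended to the observation
)
```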
Hi ChengYaofeng, we actually have another repo for these SPA baselines; see https://github.com/haosulab/ManiSkill2-SPA (it will be available later today or tomorrow). Sorry for not making this repo public earlier.
I believe it's fine to do so. Each GPU will receive its own copy of the agent network. We already handle, e.g., the BatchNorm-to-SyncBatchNorm conversion, as long as the module...
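For reference, the generic PyTorch pattern for that conversion before wrapping a model in DDP looks roughly like this (a sketch, not the repo's exact code; it assumes the distributed process group has already been initialized):

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_for_ddp(model: nn.Module, device_id: int) -> nn.Module:
    # Replace every BatchNorm layer with SyncBatchNorm so batch statistics
    # are synchronized across processes/GPUs during distributed training.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = model.to(device_id)
    # Each process/GPU holds its own replica; gradients are all-reduced by DDP.
    return DDP(model, device_ids=[device_id])
```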