Zheng Xiong
Hi @erikfrey, thanks for your help! There is no interaction between the robots, and running multiple scenes in the same environment makes sense to me. But I was wondering...
Thanks! But I noticed that for the orientation dimensions of the action space, RT-1 uses an axis-angle representation for the Google Robot tasks and a roll-pitch-yaw representation for the WidowX tasks ([link](https://github.com/simpler-env/SimplerEnv/blob/0b13b698138576c449f8853c99b04202e3c04952/simpler_env/policies/rt1/rt1_model.py#L45)). And...
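For reference, the two orientation representations can be converted with `scipy.spatial.transform.Rotation`. This is just a minimal sketch of the conversion itself; the extrinsic `"xyz"` Euler convention is an assumption on my part and may differ from what the policy/env pair actually expects:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# An axis-angle (rotation-vector) action: rotation axis scaled by the angle in radians.
axis_angle = np.array([0.1, -0.2, 0.3])

# Axis-angle -> roll/pitch/yaw. The extrinsic "xyz" convention is an assumption;
# check which convention the policy/env pair actually uses.
rpy = Rotation.from_rotvec(axis_angle).as_euler("xyz")

# Roll/pitch/yaw -> axis-angle (round trip to sanity-check the conversion).
axis_angle_back = Rotation.from_euler("xyz", rpy).as_rotvec()
assert np.allclose(axis_angle, axis_angle_back)
```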
Sorry for the confusion. I meant expert demos for each `simpler` env in simulation. I guess directly replaying an Open X-Embodiment action sequence in `simpler` would just fail due to dynamics...
Thanks for your reply! If I want to replay an OXE expert trajectory open-loop in the corresponding `simpler` environment, how should I set up and reset the environment...
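To make the question concrete, what I have in mind is roughly the sketch below: build an environment with `simpler_env.make` (as in the SimplerEnv README) and step a recorded action sequence open-loop. Here `demo_actions` is a hypothetical array of recorded OXE actions and the `.npy` path is a placeholder; the part I'm unsure about is how `reset` should be configured so the initial state matches the demo:

```python
import numpy as np
import simpler_env

# Hypothetical recorded OXE action sequence, shape (T, action_dim); placeholder path.
demo_actions = np.load("demo_actions.npy")

env = simpler_env.make("google_robot_pick_coke_can")
obs, reset_info = env.reset(seed=0)  # how to make this match the demo's initial state?

# Open-loop replay: execute the recorded actions without any feedback.
for action in demo_actions:
    obs, reward, done, truncated, info = env.step(action)
    if done or truncated:
        break
env.close()
```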
I see. Thanks! I wanted to run demos for the existing `simpler` environments. I'll try the first approach you suggested. Thanks!
Hi @xuanlinli17, thanks a lot for your help! About "adjust the initial object pose to match the demo's first frame" in your previous reply, I was wondering how...
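To show what I imagine concretely, here is a rough sketch using SAPIEN's pose API. Both the pose values and `env.unwrapped.obj` are assumptions on my part: the pose is a placeholder for whatever the demo's first frame contains, and `obj` is my guess at where the grasp-single envs keep the target object actor:

```python
import sapien.core as sapien

# Placeholder pose taken from the demo's first frame:
# position (x, y, z) in meters and quaternion (w, x, y, z).
demo_obj_pos = [0.0, 0.2, 0.9]
demo_obj_quat = [1.0, 0.0, 0.0, 0.0]

obs, reset_info = env.reset(seed=0)

# Assumed attribute: the grasp-single envs seem to store the target
# object actor as `obj`; overwrite its pose right after reset.
obj = env.unwrapped.obj
obj.set_pose(sapien.Pose(p=demo_obj_pos, q=demo_obj_quat))
```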
Thanks for your help! Here is a minimal code example of the transformer module I use:

```python
class MlpBlock(nn.Module):
  """Transformer MLP / feed-forward block."""
  mlp_dim: int
  dtype: Dtype = jnp.float32
  ...
```
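For completeness, the module is essentially the standard `MlpBlock` from the Flax vision_transformer reference; a self-contained version looks like the sketch below. I'm assuming the fields and `__call__` body elided above follow that reference implementation:

```python
from typing import Any, Callable, Optional

import jax.numpy as jnp
import flax.linen as nn

Dtype = Any  # alias for parameter/computation dtypes


class MlpBlock(nn.Module):
  """Transformer MLP / feed-forward block."""
  mlp_dim: int
  dtype: Dtype = jnp.float32
  out_dim: Optional[int] = None
  dropout_rate: float = 0.1
  kernel_init: Callable = nn.initializers.xavier_uniform()
  bias_init: Callable = nn.initializers.normal(stddev=1e-6)

  @nn.compact
  def __call__(self, inputs, *, deterministic):
    # Project up to mlp_dim, apply GELU + dropout, then project back down.
    out_dim = inputs.shape[-1] if self.out_dim is None else self.out_dim
    x = nn.Dense(self.mlp_dim, dtype=self.dtype,
                 kernel_init=self.kernel_init, bias_init=self.bias_init)(inputs)
    x = nn.gelu(x)
    x = nn.Dropout(rate=self.dropout_rate)(x, deterministic=deterministic)
    x = nn.Dense(out_dim, dtype=self.dtype,
                 kernel_init=self.kernel_init, bias_init=self.bias_init)(x)
    return nn.Dropout(rate=self.dropout_rate)(x, deterministic=deterministic)
```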
Thanks a lot! A further question: it seems that only a subset of objects, or triplets of objects, is used for the pick-and-move tasks (like [here](https://github.com/simpler-env/ManiSkill2_real2sim/blob/cd45dd27dc6bb26d048cb6570cdab4e3f935cc37/mani_skill2_real2sim/envs/custom_scenes/grasp_single_in_scene.py#L594)...
Thanks! Where can I find a full list of all the objects supported in `simpler` for the picking and moving tasks, then?
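In case it helps to show what I'm after, something like this sketch would already answer my question; the JSON path is purely a placeholder for wherever the custom-asset model info actually lives in ManiSkill2_real2sim:

```python
import json

# Placeholder path: the actual model-info file for the custom
# real2sim assets may live elsewhere in the ManiSkill2_real2sim data folder.
with open("ManiSkill2_real2sim/data/custom/info_pick_custom_v0.json") as f:
    model_db = json.load(f)

# Each top-level key should be a model id usable as a pickable object.
for model_id in sorted(model_db):
    print(model_id)
```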