RLBench
A large-scale benchmark and learning environment.
If you create demos with rendering at, say, 256x256 resolution, and then load them using https://github.com/stepjam/RLBench/blob/master/rlbench/utils.py#L41 with CameraConfigs of a different size (e.g. 128x128), then all images (including RGB + depth)...
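As a rough illustration of the resolution mismatch described above (assuming images are plain nested lists of pixels; RLBench's actual loader may handle resizing differently), halving a stored 256x256 image down to a requested 128x128 by nearest-neighbour striding looks like:

```python
def downsample_2x(image):
    # Keep every other row and every other column (nearest-neighbour).
    return [row[::2] for row in image[::2]]

# Stand-in for a stored 256x256 image.
big = [[0] * 256 for _ in range(256)]
small = downsample_2x(big)
assert len(small) == 128 and len(small[0]) == 128
```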
@stepjam Hello, thank you for providing such a good benchmark environment. I tried to learn to use RLBench, but when I executed an example (any of them), I found that the robotic...
Hi, thanks for your splendid work. I would like to modify the texture of a specific part of an object, such as the buttons/handle on the oven. From [this line](https://github.com/stepjam/PyRep/blob/076ca15c57f2495a4194da03565891ab1aaa317e/pyrep/objects/shape.py#L408) in PyRep, it seems...
I've been using the dataset_generator.py script to generate demos for RLBench tasks, which is great. However, when I try to load them, I find that the segmentation mask handles are...
Hi @stepjam, I have an issue saving the simulation .ttm file; the task builder throws the following error: pyrep.errors.ObjectIsNotModelError: Object '%s' is not a model. Use 'set_model(True)' to convert....
Hi, I want to produce some RLBench data for my current research. I ran this file but only got some PNG images; why is there no action data? I think most imitation learning tasks...
Hi, I think you need to provide a more detailed description of your data, such as its range: what are the maximum and minimum values? This is necessary...
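A minimal sketch of the kind of range summary the comment asks for, in plain Python (the function name is illustrative, not part of RLBench):

```python
def summarize_range(values):
    """Return the min, max, and span of a sequence of numeric readings."""
    lo, hi = min(values), max(values)
    return {"min": lo, "max": hi, "span": hi - lo}

stats = summarize_range([0.1, 0.5, -0.2, 0.9])
# stats["min"] == -0.2, stats["max"] == 0.9
```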
https://github.com/stepjam/RLBench/blob/46cb9d97003961428f491dd7cc75433f6ab351dd/rlbench/gym/rlbench_env.py#L50-L63 Although the original RLBench environment includes depth images and point clouds in the observation, the gym version does not provide such information. They are important if researchers want to...
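One common workaround for a gym-style env that drops fields is to wrap it and merge the missing entries back into the observation dict. This is a sketch only; the `latest_raw_obs` attribute and the key names are assumptions for illustration, not RLBench's actual API:

```python
class DepthObservationWrapper:
    """Illustrative wrapper that copies depth/point-cloud fields from a
    raw observation into the dict returned to the agent."""

    def __init__(self, env):
        self.env = env

    def reset(self):
        return self._augment(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._augment(obs), reward, done, info

    def _augment(self, obs):
        # Hypothetical handle to the full (unfiltered) observation.
        raw = self.env.latest_raw_obs
        obs = dict(obs)
        obs["front_depth"] = raw["front_depth"]
        obs["front_point_cloud"] = raw["front_point_cloud"]
        return obs
```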
Hi, Thank you for your wonderful implementation. When I evaluate task 'open_fridge', I notice that even after calling the [reset_to_demo](https://github.com/stepjam/RLBench/blob/master/rlbench/task_environment.py#L164) function, the initial object pose configurations sometimes (although not always)...
At [gripper_action_modes.py line 61](https://github.com/stepjam/RLBench/blob/55e723d9400b54d37e6077ea2753e64874bc4639/rlbench/action_modes/gripper_action_modes.py#L61C34-L61C34), the compound inequality contradicts the intended assertion:

```python
if 0.0 > action[0] > 1.0:  # equivalent to 0.0 > action[0] AND action[0] > 1.0, which is always False
    raise...
```
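A minimal demonstration of why the chained comparison can never fire, and the out-of-range check presumably intended (helper names here are illustrative):

```python
def buggy_check(action0: float) -> bool:
    # Python chains this as (0.0 > action0) and (action0 > 1.0):
    # no number is both below 0.0 and above 1.0, so this is always False.
    return 0.0 > action0 > 1.0

def fixed_check(action0: float) -> bool:
    # True exactly when the value lies outside [0.0, 1.0].
    return not (0.0 <= action0 <= 1.0)

# The buggy form silently accepts out-of-range gripper actions.
assert buggy_check(-0.5) is False
assert buggy_check(1.5) is False
# The fixed form flags them while accepting valid values.
assert fixed_check(-0.5) is True
assert fixed_check(1.5) is True
assert fixed_check(0.5) is False
```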