
A collection of reference environments for offline reinforcement learning

104 D4RL issues (sorted by recently updated)

### Question May I ask how normalization is performed, and what the random score and expert score refer to, respectively?
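For reference, D4RL normalizes an episode return against per-environment random-policy and expert-policy reference returns, conventionally reported on a 0–100 scale. A minimal sketch (the environment id and the return value below are just examples):

```python
import gym
import d4rl  # importing d4rl registers the offline-RL environments with gym

env = gym.make('hopper-medium-v2')   # example environment id
episode_return = 1500.0              # example undiscounted return from your agent

# normalized = (return - random_return) / (expert_return - random_return),
# usually multiplied by 100 when reported.
normalized_score = 100.0 * env.get_normalized_score(episode_return)
print(normalized_score)
```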

Hi, I'm trying to use the maze2d-umaze-v1 dataset but I get the following error: > Traceback (most recent call last): File "test.py", line 20, in env = gym.make('maze2d-umaze-v1') File "C:\Users\user\miniconda3\envs\d3rlpyEnv2\lib\site-packages\gym\envs\registration.py", line...
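The traceback above is cut off, but a common cause of `gym.make` failing on D4RL ids is that `d4rl` was never imported, so the environments are not registered. A minimal sketch of the intended usage, assuming a working mujoco_py install:

```python
import gym
import d4rl  # registers 'maze2d-umaze-v1' (and the other D4RL ids) with gym

env = gym.make('maze2d-umaze-v1')
dataset = env.get_dataset()          # downloads the HDF5 dataset on first use
print(dataset['observations'].shape, dataset['actions'].shape)
```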

I wonder if there is any approach to adding stochasticity to the MuJoCo environments? Thanks!
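One generic way to do this, outside of D4RL itself, is a gym action wrapper that perturbs actions before they reach the simulator. A rough sketch (the wrapper name and noise scale are made up here):

```python
import gym
import numpy as np

class NoisyActionWrapper(gym.ActionWrapper):
    """Adds zero-mean Gaussian noise to every action before it reaches MuJoCo."""

    def __init__(self, env, noise_std=0.1):
        super().__init__(env)
        self.noise_std = noise_std

    def action(self, action):
        noisy = action + np.random.normal(0.0, self.noise_std, size=np.shape(action))
        return np.clip(noisy, self.action_space.low, self.action_space.high)

# Example: env = NoisyActionWrapper(gym.make('hopper-medium-v2'), noise_std=0.1)
```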

### Question When I download the offline dataset and try to reproduce a trajectory with the same action sequences and initial states, the subsequent state sequences (obs & reward) gradually drift...
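This drift is expected: MuJoCo rollouts are sensitive to tiny numerical differences, so errors compound even from an identical initial state. One workaround, sketched below under the assumption that the dataset carries the full simulator state in `infos/qpos` / `infos/qvel`, is to restore the recorded state before each step rather than only at the start:

```python
import gym
import d4rl

env = gym.make('halfcheetah-medium-v2')   # example environment id
data = env.get_dataset()

env.reset()
for t in range(1000):                     # sketch; ignores episode boundaries in the dataset
    # Restore the recorded MuJoCo state so numerical drift cannot accumulate.
    env.set_state(data['infos/qpos'][t], data['infos/qvel'][t])
    obs, reward, done, info = env.step(data['actions'][t])
```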

Hi @justinjfu @aviralkumar2907, I want to train the MuJoCo-Gym continuous control tasks with image observations. I figured out that the `image_envs` branch supports image observations, but when I did `gym.make('hopper-random-vision-v1')` I got...

I wanted to see how I may download the checkpoints (pkl files). I want to run them on different versions of an environment (with perturbations) and gather data for offline...

### Question When I try to rerun the code of "Conservative Q-Learning for Offline Reinforcement Learning", I get the error "gym.error.NameNotFound: Environment hopper-medium doesn't exist. Did you mean: `bullet-hopper-medium`?"...
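That error usually means either that `d4rl` was not imported (so its ids are not registered) or that the version suffix was dropped from the dataset name. A minimal sketch, assuming the CQL code otherwise expects a D4RL-style dataset:

```python
import gym
import d4rl  # registers the 'hopper-medium-*' environments with gym

# The dataset ids carry a version suffix; plain 'hopper-medium' is not registered.
env = gym.make('hopper-medium-v2')        # or '-v0', depending on the D4RL release
dataset = d4rl.qlearning_dataset(env)     # observations, actions, rewards, terminals, next_observations
print(dataset['observations'].shape)
```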

### Proposal Could you please tag the code that you wish your end users to use, indicating which release is stable? This will ensure that users will not use...

# Description Fix the bug in D4RL where the seed parameter is ignored. - The seed was fixed for the AntMaze environments due to [this](https://github.com/Farama-Foundation/D4RL/blob/71a9549f2091accff93eeff68f1f3ab2c0e0a288/d4rl/locomotion/ant.py#L207), causing the seed parameter to be...
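A quick way to check a fix like this, sketched with an assumed AntMaze id and the old `env.seed()` API: two differently seeded resets should no longer produce identical initial states (assuming the seed drives the reset distribution).

```python
import gym
import d4rl
import numpy as np

env = gym.make('antmaze-umaze-v0')   # example AntMaze id

env.seed(1)
obs_a = env.reset()
env.seed(2)
obs_b = env.reset()

# With the fixed (ignored) seed, these resets came out identical regardless of env.seed().
print(np.allclose(obs_a, obs_b))     # expected: False once the seed is respected
```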

### Question Hi, I was wondering if it's possible to add more than one goal or introduce sub-goals in the maze_2d environments? I'm currently looking at the `set_target` function [here](https://github.com/Farama-Foundation/D4RL/blob/71a9549f2091accff93eeff68f1f3ab2c0e0a288/d4rl/pointmaze/maze_model.py#L211)...
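The environment tracks only a single target, but one workaround is to chain calls to `set_target` so the agent pursues a sequence of waypoints, one sub-goal at a time. A rough sketch (the waypoint coordinates, the distance threshold, and the assumption that the first two observation entries are the agent's (x, y) position are all illustrative):

```python
import gym
import d4rl
import numpy as np

env = gym.make('maze2d-umaze-v1')
obs = env.reset()

waypoints = [np.array([1.0, 1.0]),   # hypothetical maze coordinates
             np.array([2.0, 1.0]),
             np.array([3.0, 1.0])]

for goal in waypoints:
    env.set_target(goal)             # move the single target to the next sub-goal
    for _ in range(200):
        obs, reward, done, info = env.step(env.action_space.sample())
        if np.linalg.norm(obs[:2] - goal) < 0.5:
            break                    # close enough; advance to the next sub-goal
```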