
Results: 10 D4RL-Evaluations issues

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.0.0 to 2.11.1. Release notes (sourced from tensorflow's releases): TensorFlow 2.11.1. Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting...

Label: dependencies

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 1.14.0 to 2.11.1. Release notes (sourced from tensorflow's releases): TensorFlow 2.11.1. Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting...

Label: dependencies

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 1.14.0 to 2.11.1. Release notes (sourced from tensorflow's releases): TensorFlow 2.11.1. Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting...

Label: dependencies
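The TensorFlow bumps above all resolve to the same patched release. A minimal sketch of the corresponding pin, assuming the affected subproject keeps its dependencies in a `requirements.txt` (the exact file and path are assumptions about this repo's layout):

```text
# requirements.txt fragment (hypothetical location):
# require the patched TensorFlow release flagged by Dependabot
tensorflow>=2.11.1
```

Note that jumping from 1.14.0 to 2.x is a major-version change; code written against the TF1 graph/session API would likely need migration, not just a pin bump.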

Bumps [lxml](https://github.com/lxml/lxml) from 4.4.1 to 4.9.1. Changelog (sourced from lxml's changelog): 4.9.1 (2022-07-01), bugs fixed: a crash was resolved when using iterwalk() (or canonicalize()) after parsing certain incorrect input. Note...

Label: dependencies

I ran the SAC algorithm in BEAR and found that the average return is always negative; I don't know where the problem is.

I was trying to run the AWR algorithm on the HalfCheetah environment as described in the README. First of all, there is no `run.py` script in the folder of...

Hi, I am trying to experiment with AWR on a static dataset, but I find that the AWR code in both this repo and the author's repo are the...

I trained a behavior cloning model with the file in BRAC, but the performance is bad. Is that the file that was used to produce the BC results in the paper?

Hi, I find it irritating that the observations in the maze2d tasks contain only the 2D positions/velocities. If the agent is not informed about the goal location (which can be...
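One common workaround for goal-blind observations like this is to wrap the environment and append the goal location to each observation. The sketch below is hedged: the `env` interface (old-style `reset`/`step` returning raw observation arrays, plus a `get_target()` method exposing the goal) is an assumption for illustration, not the actual d4rl maze2d API.

```python
import numpy as np


class GoalAppendedObservation:
    """Sketch of an observation wrapper that concatenates the goal
    location onto each observation, so a policy can condition on it.
    Assumes a hypothetical env exposing reset(), step(action), and
    get_target() -> goal coordinates."""

    def __init__(self, env):
        self.env = env

    def _augment(self, obs):
        # Append the goal coordinates to the raw observation vector.
        goal = np.asarray(self.env.get_target(), dtype=obs.dtype)
        return np.concatenate([obs, goal])

    def reset(self):
        return self._augment(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._augment(obs), reward, done, info
```

With a 4-dimensional maze2d-style observation and a 2D goal, the wrapped observations become 6-dimensional; any offline dataset used for training would need the same augmentation applied to its stored observations.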

What is the correct way to run evaluations on the Carla tasks? For example, when I run `python awr/scripts/run_conv.py`, it produces an enormous number of warnings, and NaNs also...
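When an evaluation script starts emitting NaNs, a useful first step is to locate which tensors contain them. The helper below is not part of D4RL or AWR; it is a generic sketch assuming the rollout data can be collected into a dict of array-likes:

```python
import numpy as np


def first_nan_fields(batch):
    """Return the (sorted) names of arrays in a dict of rollout
    tensors that contain at least one NaN. Generic diagnostic helper,
    not part of any D4RL/AWR API."""
    return sorted(
        name
        for name, arr in batch.items()
        if np.isnan(np.asarray(arr, dtype=float)).any()
    )
```

Running it on, say, `{"obs": ..., "rewards": ...}` after each evaluation batch narrows down whether the NaNs originate in the observations (e.g. the Carla sensor pipeline) or in the learned values/returns.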