OfflineRL-Lib
OfflineRL-Lib provides unofficial but benchmarked PyTorch implementations of selected offline RL algorithms, including:
- In-Sample Actor Critic (InAC)
- Extreme Q-Learning (XQL)
- Implicit Q-Learning (IQL)
- Decision Transformer (DT)
- Advantage-Weighted Actor Critic (AWAC)
- TD3-BC
- TD7
For model-based algorithms, please check out OfflineRL-Kit!
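To give a flavor of what these implementations involve, below is a minimal PyTorch sketch of the expectile regression objective at the core of IQL. This is an illustrative example only, not code taken from this repository; the function name, tensor shapes, and default `tau` are our own.

```python
import torch

def expectile_loss(value: torch.Tensor, target_q: torch.Tensor, tau: float = 0.7) -> torch.Tensor:
    """Asymmetric L2 loss from the IQL paper: fits V(s) toward the
    tau-expectile of Q(s, a) without querying out-of-distribution actions."""
    diff = target_q - value
    # weight positive errors (Q above V) by tau, negative errors by 1 - tau
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

# toy usage: fit 256 sampled state values toward their Q-targets
q_targets = torch.randn(256)
v_pred = torch.randn(256, requires_grad=True)
loss = expectile_loss(v_pred, q_targets, tau=0.7)
loss.backward()
```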
Benchmark Results
- We benchmark and visualize the results via WandB. Click the WandB links below and group the runs by the entry task (for offline experiments) or env (for online experiments).
- Available Runs
  - Offline:
    - TD7 :chart_with_upwards_trend:
    - XQL :chart_with_upwards_trend:
    - InAC :chart_with_upwards_trend:
    - AWAC :chart_with_upwards_trend:
    - IQL :chart_with_upwards_trend:
    - TD3BC :chart_with_upwards_trend:
    - Decision Transformer :chart_with_upwards_trend:
  - Online:
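The runs can also be queried programmatically through the public WandB API. The sketch below is an assumption-heavy example: the entity/project path and the summary key are placeholders, not the actual values behind the links above.

```python
import wandb

api = wandb.Api()
# "some-entity/OfflineRL-Lib" is a placeholder path; substitute the
# entity/project that hosts the runs linked above
runs = api.runs(
    "some-entity/OfflineRL-Lib",
    filters={"config.task": "halfcheetah-medium-v2"},  # offline runs group by `task`
)
for run in runs:
    # "normalized_score" is a hypothetical summary key, used here for illustration
    print(run.name, run.summary.get("normalized_score"))
```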
Citing OfflineRL-Lib
If you use OfflineRL-Lib in your work, please cite it with the following BibTeX entry:
@misc{offlinerllib,
  author = {Chenxiao Gao},
  title = {OfflineRL-Lib: Benchmarked Implementations of Offline RL Algorithms},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/typoverflow/OfflineRL-Lib}},
}
Acknowledgements
We thank CORL for providing fine-tuned hyperparameters.