neroRL
A Deep Reinforcement Learning framework built with PyTorch
neroRL is a PyTorch-based framework for Deep Reinforcement Learning, which I am currently developing while pursuing my PhD in this field. Its focus is on procedurally generated environments, while providing useful tools for experimenting with and analyzing trained behaviors. One core feature is support for recurrent policies.
Features
- Environments:
  - Obstacle Tower
  - Unity ML-Agents
  - Procgen
  - Gym-Minigrid (Vector (one-hot) or Visual Observations (84x84x3))
  - Gym CartPole using masked velocity
- Proximal Policy Optimization
- Discrete and Multi-Discrete Action Spaces
- Vector and Visual Observation Spaces (either alone or simultaneously)
- Recurrent Policies using Truncated Backpropagation Through Time (see the sketch below)
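
To illustrate what a recurrent policy for a discrete action space can look like in PyTorch, the snippet below is a minimal, hypothetical sketch of an actor-critic model with a GRU memory. It is not neroRL's actual implementation; the class name, layer sizes, and the assumption of flat vector observations are illustrative only. Under truncated backpropagation through time, such a model is trained on fixed-length sequence chunks, while hidden states are carried across steps and reset at episode boundaries.

```python
# Illustrative sketch only, not neroRL's actual model code.
import torch
import torch.nn as nn
from torch.distributions import Categorical


class RecurrentActorCritic(nn.Module):
    def __init__(self, obs_dim: int, num_actions: int, hidden_size: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_size), nn.ReLU())
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.policy_head = nn.Linear(hidden_size, num_actions)  # action logits
        self.value_head = nn.Linear(hidden_size, 1)              # state-value estimate

    def forward(self, obs: torch.Tensor, hidden: torch.Tensor):
        # obs: (batch, seq_len, obs_dim), hidden: (1, batch, hidden_size)
        features = self.encoder(obs)
        features, hidden = self.gru(features, hidden)
        dist = Categorical(logits=self.policy_head(features))
        value = self.value_head(features).squeeze(-1)
        return dist, value, hidden


# Usage: sample one action for a single environment step (seq_len = 1).
model = RecurrentActorCritic(obs_dim=4, num_actions=2)
hidden = torch.zeros(1, 1, 128)   # reset the hidden state at episode start
obs = torch.zeros(1, 1, 4)        # placeholder observation
dist, value, hidden = model(obs, hidden)
action = dist.sample()
```

The returned hidden state would be stored alongside the collected transitions, so that training on sequence chunks can be initialized from the hidden state observed at the start of each chunk.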
Obstacle Tower Challenge
Originally, this work started out by achieving 7th place in the Obstacle Tower Challenge using a relatively simple FFCNN. This video shows some footage of the approach and the trained behavior:
We recently published a paper at CoG 2020 (best paper candidate) that analyzes the approach taken. Additionally, the model was trained on three of the level designs and evaluated on the two that were left out.
Getting Started
To get started, check out the docs!