# DotaClient on K8s

*Distributed RL spaghetti al arabiata.*
DotaClient is a reinforcement-learning pipeline that trains agents to play Dota 2 through self-play.
- Video: (YouTube) 1v1 self-play, 9 Mar 2019, uses fountain for regen!
- Video: (YouTube) 1v1 self-play, 29 Jan 2019.
This is built upon the DotaService project, which exposes the game of Dota 2 as a gRPC service for synchronous play.
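For orientation, a minimal client sketch of that synchronous loop is shown below. The module paths, message types, and RPC names are taken from the DotaService examples and should be treated as assumptions here; they may differ between DotaService versions.

```python
import asyncio

from grpclib.client import Channel
# Assumed module paths and message names; check the DotaService protos
# for the authoritative definitions.
from dotaservice.protos.DotaService_grpc import DotaServiceStub
from dotaservice.protos.DotaService_pb2 import GameConfig, Actions


async def rollout(host='127.0.0.1', port=13337, n_steps=8):
    channel = Channel(host, port)
    env = DotaServiceStub(channel)
    # Start a fresh game and wait for the first observation.
    obs = await env.reset(GameConfig())
    for _ in range(n_steps):
        # A policy would pick actions from `obs` here; the service blocks
        # until the game has advanced and returns the next observation.
        obs = await env.step(Actions())
    channel.close()


asyncio.run(rollout())
```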

The training pipeline consists of three components:

- Distributed agents self-playing Dota 2.
- Experience/model broker (RabbitMQ).
- Distributed optimizer (PyTorch).
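A rough sketch of the broker hand-off, assuming a queue named `experience` and a pickled-dict rollout payload (both made up for illustration; the real message schema lives in the dotaclient code):

```python
import pickle

import pika  # RabbitMQ client

# Connect to the broker; host and queue name are illustrative.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='rmq'))
channel = connection.channel()
channel.queue_declare(queue='experience')

# An agent publishes each finished rollout as a serialized blob ...
rollout = {'observations': [], 'actions': [], 'rewards': []}
channel.basic_publish(exchange='', routing_key='experience',
                      body=pickle.dumps(rollout))

# ... and an optimizer worker drains the queue to build training batches.
_method, _props, body = channel.basic_get(queue='experience', auto_ack=True)
if body is not None:
    batch = pickle.loads(body)

connection.close()
```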
## Prerequisites
- Kubeflow's PyTorch Operator
- Kubernetes cluster (e.g. GKE)
- Build the dota Docker image
- Build the dotaservice Docker image
- Build the rabbitmq Docker image
- Install ksonnet
## Launch distributed Dota training
cd ks-app
ks show default # Shows the full manifest
ks param list # Lists all parameters
ks apply default # Launches everything you need
Note: A typical job has 40 agents per optimizer. One optimizer does around 1000 steps/s.
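To make the note above concrete, here is a toy sketch of what one optimizer step could look like: pop a rollout off the broker, compute a policy-gradient loss, and apply a gradient update. The model, queue name, and rollout fields are placeholders, not the dotaclient implementation.

```python
import pickle
import time

import pika
import torch
from torch import nn, optim

# Placeholder policy; the real model consumes Dota 2 world-state observations.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = optim.Adam(policy.parameters(), lr=1e-4)

connection = pika.BlockingConnection(pika.ConnectionParameters(host='rmq'))
channel = connection.channel()
channel.queue_declare(queue='experience')

while True:
    # One optimizer step: pull a rollout off the broker ...
    _method, _props, body = channel.basic_get(queue='experience', auto_ack=True)
    if body is None:
        time.sleep(0.01)  # queue empty, wait for agents to publish
        continue
    rollout = pickle.loads(body)
    obs = torch.as_tensor(rollout['observations'], dtype=torch.float32)
    actions = torch.as_tensor(rollout['actions'], dtype=torch.int64)
    rewards = torch.as_tensor(rollout['rewards'], dtype=torch.float32)

    # ... compute a simple policy-gradient loss (treating per-step rewards
    # as returns for this toy example) and update the weights.
    log_probs = torch.log_softmax(policy(obs), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * rewards).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```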