AutoRACE Simulator
A Simulation System for Implementing and Testing Autonomous Racing Algorithms

SETUP

- Install Unity Hub along with Unity 2018.4.24f1 (LTS) or higher.

- Install Python 3.6.1 or higher (an Anaconda installation is preferred, as we'll be creating a virtual environment shortly).

- Install the ML-Agents Unity package by cloning the latest stable release of the Unity ML-Agents Toolkit:
  $ git clone --branch release_6 https://github.com/Unity-Technologies/ml-agents.git
  Note: For details regarding the installation of Unity ML-Agents, please consult the official installation guide.

- Install the ML-Agents Python package (tested version: mlagents 0.19.0):
  - Create a virtual environment (strongly recommended):
    $ conda create --name ML-Agents python=3.7
  - Activate the environment:
    $ conda activate ML-Agents
  - Install the `mlagents` package from PyPI (this command also installs the required dependencies):
    $ pip3 install mlagents

- Set up the AutoRACE Simulator:
  - Navigate to the Unity ML-Agents repository directory:
    $ cd <path/to/unity-ml-agents/repository>
  - Clone this repository:
    $ git clone https://github.com/Tinker-Twins/AutoRACE-Simulator.git
  - Launch Unity Hub and select the `ADD` project button.
  - Navigate to the Unity ML-Agents repository directory and select the parent folder of this repository, `AutoRACE-Simulator`.
USAGE

Programming

Every agent needs a script inheriting from the `Agent` class; a minimal sketch follows this section. The following methods are particularly useful:

- `public override void Initialize()`
  Initializes the environment. Similar to `void Start()`.

- `public override void CollectObservations(VectorSensor sensor)`
  Collects observations. Use `sensor.AddObservation(xyz)` to add observation "xyz".

- `public override void OnActionReceived(float[] vectorAction)`
  Defines the actions to be performed using the passed `vectorAction`. The reward function is also defined here; you can use `if-else` cases to define rewards/penalties. Don't forget to call `EndEpisode()` to indicate the end of an episode.

- `public override void OnEpisodeBegin()`
  Called whenever `EndEpisode()` is called. Define your "reset" algorithm here before starting the next episode.

- `public override void Heuristic(float[] actionsOut)`
  Use `actionsOut[i]` to define manual controls during `Heuristic Only` behaviour.

Attach this script to the agent along with the `BehaviourParameters` and `DecisionRequester` scripts built into the ML-Agents Unity package (just search for their names in the Add Component dropdown menu of the agent GameObject).
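The sketch below ties these methods together for the Release 6 C# API. It is an illustrative assumption, not the simulator's actual agent script: the class name `RacerAgent`, the `Rigidbody`-based observation, and the reward terms are all placeholders showing where each piece of logic belongs.

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class RacerAgent : Agent
{
    private Rigidbody rBody;            // hypothetical vehicle rigidbody
    private Vector3 startPosition;
    private Quaternion startRotation;

    public override void Initialize()
    {
        // One-time setup, analogous to void Start()
        rBody = GetComponent<Rigidbody>();
        startPosition = transform.position;
        startRotation = transform.rotation;
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Feed the policy whatever state it needs, e.g. the velocity vector
        sensor.AddObservation(rBody.velocity);
    }

    public override void OnActionReceived(float[] vectorAction)
    {
        // Map the continuous actions to throttle and steering
        float throttle = Mathf.Clamp(vectorAction[0], -1f, 1f);
        float steering = Mathf.Clamp(vectorAction[1], -1f, 1f);
        // ... apply throttle/steering to the vehicle model here ...

        // Reward function: encourage forward progress, penalize failure
        AddReward(0.01f * Vector3.Dot(rBody.velocity, transform.forward));
        if (transform.position.y < -1f)   // e.g. the agent fell off the track
        {
            AddReward(-1f);
            EndEpisode();                 // triggers OnEpisodeBegin()
        }
    }

    public override void OnEpisodeBegin()
    {
        // "Reset" algorithm before the next episode starts
        rBody.velocity = Vector3.zero;
        rBody.angularVelocity = Vector3.zero;
        transform.SetPositionAndRotation(startPosition, startRotation);
    }

    public override void Heuristic(float[] actionsOut)
    {
        // Manual controls for Heuristic Only behaviour (handy for debugging)
        actionsOut[0] = Input.GetAxis("Vertical");    // throttle
        actionsOut[1] = Input.GetAxis("Horizontal");  // steering
    }
}
```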
Debugging

After defining your logic, test the functionality by selecting `Heuristic Only` as the `Behaviour Type` in the `BehaviourParameters` script attached to the agent, and drive the agent manually using your `Heuristic()` controls.
Training

- Create a configuration file (`<config>.yaml`) to define the training parameters. For details, refer to the official training configuration guide.
  Note: A sample configuration file, `Racer.yaml`, is provided for training the agent using a hybrid imitation-reinforcement learning architecture (a rough sketch of such a file follows this list).

- Within the `BehaviourParameters` script attached to the agent, set a unique `Behaviour Name` for training purposes.

- Activate the `ML-Agents` environment:
  $ conda activate ML-Agents

- Navigate to the Unity ML-Agents repository directory:
  $ cd <path/to/unity-ml-agents/repository>

- Start the training:
  $ mlagents-learn <path/to/config>.yaml --run-id=<Run1>

- Hit the `Play` button in the Unity Editor to "actually" start the training.
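For orientation, here is a hedged sketch of what such a hybrid configuration can look like in the Release 6 trainer format: PPO with an extrinsic reward, a GAIL reward from recorded demonstrations, and behavioral-cloning pretraining. The behaviour name, demo path, and every hyperparameter value below are illustrative assumptions, not the contents of the provided `Racer.yaml`:

```yaml
behaviors:
  Racer:                           # must match the agent's Behaviour Name
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:                   # reinforcement signal from the env reward
        gamma: 0.99
        strength: 1.0
      gail:                        # imitation signal from demonstrations
        strength: 0.5
        demo_path: Demos/Racer.demo
    behavioral_cloning:            # pretraining on the same demonstrations
      demo_path: Demos/Racer.demo
      strength: 0.5
    max_steps: 5.0e6
    time_horizon: 64
    summary_freq: 10000
```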
Training Analysis

- Navigate to the Unity ML-Agents repository directory:
  $ cd <path/to/unity-ml-agents/repository>

- Launch TensorBoard to analyze the training results:
  $ tensorboard --logdir results

- Open a browser (tested with Google Chrome) and navigate to http://localhost:6006 to view the training results.
Deployment

- Navigate to the Unity ML-Agents repository directory and locate a folder called `results`.

- Open the `results` folder and locate a folder named after the `<training_behaviour_name>` that you used while training the agent(s).

- Copy the saved neural network model(s) (with the `.nn` extension) into the `NN Models` folder of the AutoRACE Simulator Unity project.

- In the Inspector window, attach the respective NN model(s) to the `Model` variable in the `BehaviourParameters` script attached to the agent(s).

- Select `Inference Only` as the `Behaviour Type` of the `BehaviourParameters` attached to the agent(s).

- Hit the play button in the Unity Editor and watch your agent(s) play!
IMPORTANT TIPS

- Craft the reward function carefully; agents cheat a lot! (One mitigation is sketched after this list.)

- Tune the training parameters in the `<config>.yaml` file.

- Wherever possible, duplicate the training arenas within the scene to enable parallel (and hence faster) training.
  Note: Make sure to commit changes (if any) to all the duplicates as well!
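On the first tip: a common way to make reward hacking harder is to pay only for progress through ordered track checkpoints rather than for raw speed, so circling or cutting across the infield earns nothing. The fragment below is a hypothetical sketch meant to live inside the `Agent` subclass; `checkpointIndex` and `offTrack` are assumed to come from the simulator's collision/trigger logic:

```csharp
private int lastCheckpoint = 0;   // index of the last checkpoint passed

private void RewardProgress(int checkpointIndex, bool offTrack)
{
    // Pay only for *new* forward progress along the ordered checkpoints
    if (checkpointIndex > lastCheckpoint)
    {
        AddReward(0.1f * (checkpointIndex - lastCheckpoint));
        lastCheckpoint = checkpointIndex;
    }
    AddReward(-0.001f);           // small time penalty discourages stalling
    if (offTrack)
    {
        AddReward(-1.0f);         // terminal penalty for leaving the track
        EndEpisode();
    }
}
```

Remember to reset `lastCheckpoint` in `OnEpisodeBegin()` so each episode starts from a clean slate.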
 
DEMO
Implementation demonstrations are available on YouTube.
CITATION
We encourage you to cite the following paper if you use any part of this project for your research:
@misc{AutoRACE-2021,
    doi = {10.48550/ARXIV.2110.05437},
    url = {https://arxiv.org/abs/2110.05437},
    author = {Samak, Chinmay Vilas and Samak, Tanmay Vilas and Kandhasamy, Sivanathan},
    keywords = {Robotics (cs.RO), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Neural and Evolutionary Computing (cs.NE), FOS: Computer and information sciences},
    title = {Autonomous Racing using a Hybrid Imitation-Reinforcement Learning Architecture},
    publisher = {arXiv},
    year = {2021},
    copyright = {arXiv.org perpetual, non-exclusive license}
}