cups-rl
A3C LSTM GA for language grounding
- Added A3C_LSTM_GA model from DeepRL-Grounding by Devendra Chaplot
- Contains cozmo_env PR
- Added model checkpointing and experiment IDs
- Added tensorboardX logging, which writes to the experiment folder together with the config and argparse settings
- Added avg_episode_return
- Added random actions on init
- Added 4 natural language tasks with variants specified by config files (NaturalLanguageLookAtObjectTask, NaturalLanguageNavigateToObjectTask, NaturalLanguagePickUpObjectTask, NaturalLanguagePickUpMultipleObjectTask)
- Added test cases for two of the natural language tasks
- Added if statements that select the model based on whether the task uses natural language, choosing A3C_LSTM_GA if it does and A3C otherwise
- Added 3 config files and renamed main config to default_config.json
- Created task_utils.py for visualisation, preprocessing and reward functions
- Fixed config overwrite bug
- Fixed many other bugs I've forgotten about (e.g. one test case was broken)
- Renamed PickUp to PickUpTask
- Added the ability to remove lookupdown_actions, moveupdown_actions and the put action individually (pickup tasks do not need put)
- Added ai2thor_examples.py for previous test cases that were purely just ai2thor run examples
- Added inbuilt_interactive_mode.py, a script for exploring ai2thor with a Unity build path and .interact()
- Added pdb_interactive_and_check_bbox_reward_functions.py for testing reward functions and printing bboxes using pdb debugging
- Retained the VizDoom integration from the original DeepRL-Grounding
- Added random scene choice on reset, set from config
- Added ASCII art figures of the architectures within model.py
- Added documentation
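The checkpointing and experiment-ID entries above could be sketched as follows. This is a minimal illustration, not the repo's actual code: `create_experiment`, `checkpoint_path`, and the directory layout are all hypothetical names chosen for this example. Refusing to overwrite an existing config file is one simple way to guard against the kind of config-overwrite bug mentioned in the changelog.

```python
import json
import os

def create_experiment(base_dir, experiment_id, config):
    """Create a per-experiment folder and save the config into it.

    Raises FileExistsError rather than silently overwriting a config
    from a previous run with the same experiment ID.
    """
    exp_dir = os.path.join(base_dir, experiment_id)
    os.makedirs(exp_dir, exist_ok=True)
    config_path = os.path.join(exp_dir, "config.json")
    if os.path.exists(config_path):
        raise FileExistsError(f"Refusing to overwrite {config_path}")
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return exp_dir

def checkpoint_path(exp_dir, step):
    """Name checkpoints by training step inside the experiment folder."""
    return os.path.join(exp_dir, f"checkpoint_{step}.pth")
```

A tensorboardX `SummaryWriter` pointed at the same `exp_dir` would then keep logs, config, and checkpoints for one experiment together in one folder.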
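The `avg_episode_return` metric and the random actions on episode init could look roughly like this; both the class and function names here are illustrative assumptions, not the repo's API.

```python
import random

class AvgEpisodeReturn:
    """Incremental running average of episode returns, suitable for
    logging a single scalar to tensorboard after each episode."""

    def __init__(self):
        self.count = 0
        self.avg = 0.0

    def update(self, episode_return):
        self.count += 1
        # Incremental mean: avoids storing the full history of returns.
        self.avg += (episode_return - self.avg) / self.count
        return self.avg

def random_init_actions(action_space, n, rng=random):
    """Sample n random actions to execute at episode init, so that
    episodes do not all start from the exact same state."""
    return [rng.choice(action_space) for _ in range(n)]
```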
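The model-selection if statement described above can be reduced to a sketch like this one. The `Task` class and `instruction` attribute are assumptions for illustration; the real code presumably inspects its own task objects, but the branching logic is the same: the gated-attention model is only needed when a natural-language instruction is present.

```python
class Task:
    """Minimal stand-in for a cups-rl task object."""

    def __init__(self, name, instruction=None):
        self.name = name
        # Natural-language instruction string, or None for non-language tasks.
        self.instruction = instruction

def choose_model_name(task):
    """A3C_LSTM_GA needs an instruction to gate on; plain A3C does not."""
    return "A3C_LSTM_GA" if task.instruction is not None else "A3C"
```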
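Finally, the config-driven random scene reset might be sketched as below. The config key names (`scenes`, `random_scene`) are hypothetical, chosen only to show the intended behaviour: pick a random scene on reset when enabled, otherwise always use a fixed one.

```python
import random

def next_scene(config, rng=random):
    """Return the scene to load on env reset.

    With random_scene enabled, choose uniformly from the configured
    scene list; otherwise deterministically use the first scene.
    """
    scenes = config["scenes"]
    if config.get("random_scene", False):
        return rng.choice(scenes)
    return scenes[0]
```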