deepbots
A wrapper framework for Reinforcement Learning in the Webots robot simulator using Python 3.
TO DO:

- [x] Change comm scheme
- [x] Add CUDA support
- [x] Add docstrings
- [x] Add support for remaining kwargs of `pygad.GA`
- [x] Add more logging...
According to the [Webots documentation](https://cyberbotics.com/doc/reference/robot#field-summary), the customData field can be used to implement robot/supervisor communication without receivers/emitters. This can be useful when the observation data gathered from the robot is large (e.g. medium/high resolution...
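A minimal sketch of the idea: Webots exposes the customData field through `Robot.setCustomData()`/`getCustomData()`, so an observation vector could be serialized into that string instead of being sent through an Emitter. The code below only demonstrates the (de)serialization step with plain JSON; the helper names `pack_observation`/`unpack_observation` are hypothetical, not part of deepbots.

```python
import json

def pack_observation(obs):
    """Serialize a list of floats into a customData-compatible string."""
    return json.dumps(obs)

def unpack_observation(custom_data):
    """Recover the observation list from the customData string."""
    return json.loads(custom_data)

observation = [0.25, -1.5, 3.0]
# In a real controller, this string would be written via robot.setCustomData(...)
encoded = pack_observation(observation)
# ...and the supervisor would read it back from the node's customData field.
decoded = unpack_observation(encoded)
```

Since customData is a single string field, any text encoding works; JSON keeps the example self-contained, though a flat comma-separated format would be cheaper for large observations.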
The [RobotEmitterReceiver](https://github.com/aidudezzz/deepbots/blob/dev/deepbots/robots/controllers/robot_emitter_receiver.py) class should inherit from the Webots Robot class **if possible**, similarly to other deepbots classes which inherit from the Webots Supervisor class, so that it can access any Webots method directly....
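The proposed design could look like the sketch below. A stand-in class is used for Webots' Robot (the real `controller.Robot` is only importable inside Webots); the point is that inheritance removes the need to forward calls through an internal `self.robot` instance.

```python
class Robot:
    """Stub with the shape of Webots' Robot API, for illustration only."""
    def getBasicTimeStep(self):
        return 32.0

class RobotEmitterReceiver(Robot):
    """Inheriting from Robot gives direct access to every Webots method,
    instead of wrapping a robot instance via composition."""
    def __init__(self):
        # No self.robot indirection needed: self IS the controller.
        self.timestep = int(self.getBasicTimeStep())

controller = RobotEmitterReceiver()
```

This mirrors how the deepbots supervisor classes already work, so user code can call Webots methods on the controller object itself.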
I integrated [GoalEnv from gym/core.py](https://github.com/openai/gym/blob/b84b69c872a3159900e6ec82a4b98cfa3e7bb0ed/gym/core.py#L167-L209) with deepbots for the Robot-Supervisor scheme. `GoalEnv` also inherits from `Env`, but it imposes a required structure on the `observation_space`. 1. `reset(self)`: `self.observation_space` must be a Goal-compatible...
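The structure `GoalEnv` imposes can be sketched as follows. This is a minimal stand-in, not deepbots code: per the linked gym/core.py, observations must be dict-like with `observation`, `achieved_goal`, and `desired_goal` keys, and the environment must expose a `compute_reward` method over goals.

```python
REQUIRED_KEYS = ("observation", "achieved_goal", "desired_goal")

class SketchGoalEnv:
    """Toy environment showing the goal-compatible observation layout."""

    def reset(self):
        obs = {
            "observation": [0.0, 0.0],
            "achieved_goal": [0.0, 0.0],
            "desired_goal": [1.0, 1.0],
        }
        # GoalEnv.reset() errors out if any required key is missing.
        for key in REQUIRED_KEYS:
            assert key in obs, f"missing {key}"
        return obs

    def compute_reward(self, achieved_goal, desired_goal, info):
        # Sparse reward, a common convention for GoalEnv-style tasks:
        # 0 when the goal is reached, -1 otherwise.
        return 0.0 if achieved_goal == desired_goal else -1.0

env = SketchGoalEnv()
first_obs = env.reset()
```

Decoupling the reward into `compute_reward(achieved, desired, info)` is what enables hindsight-style relabeling (e.g. HER), since rewards can be recomputed for substituted goals after the episode.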
[OpenAI Gym](https://gym.openai.com/) provides several environments to demonstrate the capabilities of RL on different problems. Deepbots' goal is to demonstrate the capabilities of RL in a 3D, high-fidelity simulator such as...
Initially, deepbots was developed to support Reinforcement Learning algorithms; however, we expect that it can easily be extended to support Evolutionary Algorithms. When it comes to evolutionary algorithms, a population of...
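A hypothetical sketch of how an evolutionary loop could drive deepbots: each genome would parameterize one robot controller, with the simulation reset between fitness evaluations. The fitness function below is a toy stand-in for episode reward so the loop is runnable outside Webots; none of these names come from deepbots or pygad.

```python
import random

random.seed(0)  # deterministic for the example

def fitness(genome):
    # Toy objective: maximize the sum of genes.
    # In deepbots this would be the accumulated episode reward.
    return sum(genome)

def evolve(population, generations=30, mutation_rate=0.1):
    for _ in range(generations):
        # Keep the better half, refill with mutated copies of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        children = [
            [g + random.uniform(-1, 1) * mutation_rate for g in parent]
            for parent in survivors
        ]
        population = survivors + children
    return max(population, key=fitness)

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
best = evolve(population)
```

In practice a library such as `pygad.GA` (already on the TO DO list) would replace this hand-rolled loop, with the supervisor evaluating each genome in simulation.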
```python
if super(Supervisor, self).step(self.timestep) == -1:
    exit()
self.apply_action(action)
return (
    self.get_observations(),
    self.get_reward(action),
    self.is_done(),
    self.get_info(),
)
```

In RL, it seems to be more natural to `apply_action` and then `Supervisor.step()`. Otherwise,...
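The ordering argument can be made concrete with a stub Supervisor: if `step()` runs before `apply_action()`, the chosen action only influences the physics one step later. The event log below makes the proposed ordering explicit (the stub classes are illustrative, not deepbots code).

```python
events = []

class StubSupervisor:
    """Records the order of calls instead of running Webots physics."""

    def step(self, timestep):
        events.append("physics step")
        return 0  # Webots returns -1 when the simulation ends

    def apply_action(self, action):
        events.append(f"apply action {action}")

sup = StubSupervisor()

# Proposed order: apply the action first, then advance the simulator,
# so the physics step integrates the freshly applied action.
sup.apply_action(0)
sup.step(32)
```

With the original order reversed, the first `step()` would advance the world under whatever actuator commands were left over from the previous iteration.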