reinforcement-learning
Reinforcement learning baseline agent trained with the Asynchronous Advantage Actor-Critic (A3C) algorithm.
Hi, I'm trying out this code on Windows and I always get this error: ERROR: (localhost:2000) failed to read data: timed out. This is the error trace: runfile('C:/Users/cvaram/Documents/CARLA_0.9.5/PythonAPI/run_RL.py', args='--city-name...
Command:
```
python3 run_RL.py --city-name Town01 --corl-2017
```
Error:
```
Traceback (most recent call last):
  File "run_RL.py", line 77, in
    model_file='agent/trained_model/9600000.h5', n_actions=9, frameskip=1)
  File "/home/user/projects/carla-0.8.2/PythonClient/reinforcement-learning/agent/runnable_model.py", line 26, in __init__
    self.setup_model(self.n_actions,...
```
When I run python run_RL.py on Windows 10, an error occurs: ModuleNotFoundError: No module named 'carla.driving_benchmark'. Does anyone know how to solve it?
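A common cause of this error is that the directory containing the `carla` client package (which provides `driving_benchmark` in CARLA 0.8.x) is not on the Python path. A minimal sketch of the usual fix is below; the `CARLA_CLIENT_DIR` path is a hypothetical example and must be adjusted to your own installation:

```python
import os
import sys

# Hypothetical install location: point this at the PythonClient directory of
# your CARLA 0.8.x install, which contains the 'carla' package with
# driving_benchmark inside it.
CARLA_CLIENT_DIR = os.path.join('C:\\', 'CARLA_0.8.2', 'PythonClient')

# Prepend it so Python can resolve 'carla.driving_benchmark' imports.
if CARLA_CLIENT_DIR not in sys.path:
    sys.path.insert(0, CARLA_CLIENT_DIR)
```

Alternatively, setting the `PYTHONPATH` environment variable to the same directory before launching the script achieves the same thing without editing code.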
Hi, could someone tell me how to train A3C on the CARLA simulator? I believe I should use A3CTrainer in a3c.py, but I cannot find where this class is used...
Hi, your work is amazing! I want to ask two questions: can I use https://github.com/muupan/async-rl to train the A3C model for RL? By the way, when...
After I execute the code and CARLA starts, it gives me an error... this is the full log:
```
Traceback (most recent call last):
  File "run_RL.py", line 89, in
    args.host, args.port)
  File...
```
Hi @felipecode First, I would like to thank you for this amazing work. I had some errors while running the run_RL.py script (I opened an issue carla-simulator#17 that I closed...
Hey, I ran the RL benchmark with
```
python run_RL.py --corl-2017
```
My Python environment was set up with
```
conda create -n carla_rl python=3.6 chainer=1.24.0 cached-property=1.4.2 pillow=5.1.0...
```
Hi, I want to train an RL agent in CARLA. Is there any way to run the simulation faster? I use CARLA 0.9.6, Ubuntu 18.04, and an RTX 2080 Ti. Thank you.
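For CARLA 0.9.x, the usual way to speed up training is to run the simulator in synchronous mode with a fixed time-step, so the client controls how fast simulated time advances. A minimal sketch, assuming the 0.9.6 `WorldSettings` API (the helper name `make_fast_sim_settings` is our own invention):

```python
from types import SimpleNamespace

def make_fast_sim_settings(settings, delta_seconds=0.05):
    # Synchronous mode makes the server wait for the client's tick, and a
    # fixed delta_seconds decouples simulated time from wall-clock time:
    # with rendering quality lowered (e.g. launching with -quality-level=Low),
    # each simulated second can take much less than a real second.
    settings.synchronous_mode = True
    settings.fixed_delta_seconds = delta_seconds
    return settings

# Offline illustration with a stand-in settings object (no server needed):
demo = make_fast_sim_settings(SimpleNamespace())

# Against a live simulator (requires the carla package and a running server):
# import carla
# client = carla.Client('localhost', 2000)
# world = client.get_world()
# world.apply_settings(make_fast_sim_settings(world.get_settings()))
```

Note that in synchronous mode the training loop must call `world.tick()` each step, otherwise the simulation will stall.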
Thank you @felipecode for sharing the reinforcement-learning code; I think the trained result is good. I want to use the training code, so do you have a plan to share the...