
Integration of RLlib and CARLA

6 rllib-integration issues

Hi, I want to run the A3C algorithm. Can I directly set `num_workers > 0`? Do I need a vectorized environment?
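For reference, a minimal sketch of what that could look like with RLlib's legacy trainer API; the registered environment name `CarlaEnv` and the worker counts here are assumptions, not taken from this repo. Setting `num_workers > 0` starts parallel rollout workers, so a vectorized environment is not required:

```python
# Minimal sketch, assuming an environment has been registered under
# the name "CarlaEnv" (e.g. via tune.register_env).
import ray
from ray import tune

ray.init()
tune.run(
    "A3C",
    config={
        "env": "CarlaEnv",         # assumed registered env name
        "num_workers": 4,          # >0 spawns parallel sampler processes
        "num_envs_per_worker": 1,  # no vectorization; each worker drives
                                   # its own CARLA server and environment
    },
)
```

With CARLA, each rollout worker typically needs its own simulator instance, which is why scaling `num_workers` is usually preferred over raising `num_envs_per_worker`.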

My environment is a 2080 Ti GPU, an i9 CPU, 64 GB RAM, NVIDIA-SMI 470.161.03, Driver Version 470.161.03, CUDA Version 11.4. After starting CARLA 0.9.11, I run `python3 dqn_train.py dqn_example/dqn_config.yaml --name dqn` and...

### Problem

_ObjIdx_ and _ObjTag_ return 0 for all points of the semantic lidar sensor.

### Solution

Changing how the input is read in order to give each output variable in...
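A hedged sketch of that kind of fix: decode the raw buffer with an explicit per-point layout so the integer fields are not read as zeros. The field order follows the CARLA semantic lidar documentation; the function name and surrounding callback wiring are illustrative:

```python
import numpy as np

def parse_semantic_lidar(measurement):
    """Decode a carla.SemanticLidarMeasurement into a structured array."""
    point_dtype = np.dtype([
        ("x", np.float32), ("y", np.float32), ("z", np.float32),
        ("cos_inc_angle", np.float32),   # cosine of the incident angle
        ("ObjIdx", np.uint32),           # id of the hit actor
        ("ObjTag", np.uint32),           # semantic tag of the hit surface
    ])
    return np.frombuffer(measurement.raw_data, dtype=point_dtype)
```

Reading the buffer as plain floats and casting afterwards would mangle the two unsigned-integer fields, which would match the all-zero symptom described above.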

Is it possible to extend this to support multi-agent reinforcement learning? If so, I would appreciate it if you could create a boilerplate.
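No such boilerplate exists here yet, but a hypothetical skeleton of an RLlib `MultiAgentEnv` wrapper is sketched below; the class name, agent ids, and reward logic are all placeholders:

```python
# Hypothetical skeleton (not part of this repo), using RLlib's
# dict-based MultiAgentEnv API (Ray 1.x style): observations, rewards,
# and dones are dicts keyed by agent id.
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class MultiAgentCarlaEnv(MultiAgentEnv):
    def __init__(self, config):
        self.agent_ids = ["ego_0", "ego_1"]  # illustrative agent ids

    def reset(self):
        # One observation per agent id.
        return {aid: self._observe(aid) for aid in self.agent_ids}

    def step(self, action_dict):
        # Apply each agent's action here, then tick the simulator once.
        obs = {aid: self._observe(aid) for aid in action_dict}
        rewards = {aid: 0.0 for aid in action_dict}  # placeholder rewards
        dones = {aid: False for aid in action_dict}
        dones["__all__"] = False  # True ends the episode for everyone
        return obs, rewards, dones, {}

    def _observe(self, agent_id):
        raise NotImplementedError  # per-agent sensor data goes here
```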

Hello, do you think it is possible to run the rllib integration in Docker? My idea of the process so far has been:

- creating a Dockerfile
- FROM carlasim/carla:0.9.11...
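A rough sketch of a Dockerfile along those lines; the Ray version, system packages, and paths below are assumptions and untested:

```dockerfile
# Sketch only: the base image comes from the question above; everything
# else (package versions, paths) is a guess, not verified against this repo.
FROM carlasim/carla:0.9.11

USER root
RUN apt-get update && \
    apt-get install -y python3-pip git && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install "ray[rllib]==1.0.1" numpy pygame

# Copy the rllib-integration code into the image (path is illustrative).
COPY . /workspace/rllib-integration
WORKDIR /workspace/rllib-integration
```

The container would still need GPU access for the CARLA server (e.g. `docker run --gpus all ...`), and the server has to be started before any training script connects to it.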