
Multi-agent Reinforcement Learning for Autonomous Vehicles

14 MARL_CAVs issues

Hello, I found that the 'eval_rewards.npy' file was not generated in the 'results' folder when I ran the 'plot_benchmark_saft.py' program to draw the comparison curve. The error is as follows....
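For context on what the plotting script expects, here is a minimal sketch of saving and reloading an `eval_rewards.npy` file. The directory name and array contents are assumptions for illustration, not the repo's actual code; the point is that the file only exists if training writes it before plotting.

```python
import os
import numpy as np

# Hypothetical sketch: training saves per-evaluation average rewards to
# results/eval_rewards.npy, and the plotting script later loads that file.
results_dir = "results"
os.makedirs(results_dir, exist_ok=True)

eval_rewards = np.array([-5.8, -3.2, -1.1, 0.4])  # dummy evaluation averages
np.save(os.path.join(results_dir, "eval_rewards.npy"), eval_rewards)

# If this load step raises FileNotFoundError, the save step never ran.
loaded = np.load(os.path.join(results_dir, "eval_rewards.npy"))
print(loaded.shape)  # (4,)
```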

Hello! Thank you for your great work first! I'm sorry to bother you, but I find training quite slow. So I want to use SubprocVecEnv to build multiple...

Where is the Safety Supervisor? I haven't seen it on Google Cloud Drive either.

Hi! @DongChen06 Thanks for your awesome work! I'm trying to reproduce the results using MAA2C. The average reward keeps fluctuating after 5,000 episodes, ranging from -5.82 at Episode 5,000...

Greetings, I hope you are doing well and I want to say props for the great work. I am writing to ask a question about the code which I can...

Hi! Thank you for your great work! Recently I wanted to add the action mask module (just changing the actor network in model_common.py to the network in model.py) and Safety...
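For readers unfamiliar with the action-mask module mentioned above, the usual technique is to push the logits of invalid actions to a large negative value before the softmax, so their probabilities become effectively zero. This is a generic sketch of that idea, not the exact implementation in model.py.

```python
import math

def masked_softmax(logits, mask):
    """Softmax over logits with invalid actions (mask == False) suppressed."""
    # Replace masked-out logits with a large negative value so that
    # exp() drives their probability to ~0.
    masked = [l if m else -1e9 for l, m in zip(logits, mask)]
    mx = max(masked)  # subtract max for numerical stability
    exps = [math.exp(x - mx) for x in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Three actions; the second one is invalid under the current state.
probs = masked_softmax([1.0, 2.0, 0.5], [True, False, True])
print(probs)  # second entry is ~0, the rest renormalize to sum to 1
```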

Hello! After I added the following log message to the training loop of the train() function in run_ma2c.py:

```python
while ma2c.n_episodes < MAX_EPISODES:
    ma2c.explore()
    log_message(f"explore times: {ma2c.n_episodes + 1}")
    # ...
```
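The truncated snippet above can be expanded into a self-contained sketch of the logging pattern. `FakeMA2C` is a hypothetical stand-in for the trainer in run_ma2c.py; only the episode counter and `explore()` matter here.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("run_ma2c")

class FakeMA2C:
    """Hypothetical stand-in for the MA2C trainer used in run_ma2c.py."""
    def __init__(self):
        self.n_episodes = 0

    def explore(self):
        # A real trainer would collect a rollout here; we just count episodes.
        self.n_episodes += 1

MAX_EPISODES = 3
ma2c = FakeMA2C()
while ma2c.n_episodes < MAX_EPISODES:
    ma2c.explore()
    log.info("explore times: %d", ma2c.n_episodes)
```

Using the logging module rather than bare prints keeps timestamps attached to each exploration step, which helps when diagnosing slow training loops.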

Hello! Currently, the simulation process uses fixed durations and numbers of vehicles, with each episode restarting from zero after it ends. Has the author considered fixing only the...

Hello! I noticed that the maximum number of episodes can be controlled by MAX_EPISODES during training, and EVAL_INTERVAL determines the evaluation intervals; however, the evaluation process seems to determine the number of...
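As a quick illustration of how such settings typically interact: training runs for MAX_EPISODES episodes and an evaluation is triggered every EVAL_INTERVAL episodes. `EVAL_EPISODES` below is a hypothetical name for the per-trigger evaluation count, not necessarily the repo's actual variable.

```python
# Assumed scheduling sketch, not the repository's exact logic.
MAX_EPISODES = 20    # total training episodes
EVAL_INTERVAL = 5    # evaluate every 5 training episodes
EVAL_EPISODES = 3    # hypothetical: episodes run per evaluation trigger

eval_points = []
for episode in range(1, MAX_EPISODES + 1):
    if episode % EVAL_INTERVAL == 0:
        # each trigger would run EVAL_EPISODES episodes with a greedy policy
        eval_points.append(episode)

print(eval_points)                     # [5, 10, 15, 20]
print(len(eval_points) * EVAL_EPISODES)  # 12 evaluation episodes in total
```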