
Additional environments compatible with OpenAI Gym

Results: 9 quad-swarm-rl issues

I trained a small (deep sets) neural network as suggested in [this paper](https://arxiv.org/abs/2109.07735), and implemented the C code for calculating the output for a given input. The code should run on...

I let [sim2real.py](https://github.com/Zhehui-Huang/quad-swarm-rl/blob/master/swarm_rl/sim2real/sim2real.py) generate the C code for network evaluation; however, I am a bit confused about the calculation. When I run the Python script, the following C code is...
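For context, the kind of evaluation such generated C code performs can be sketched in plain NumPy. This is only an illustration of a deep-sets forward pass (shared embedding, permutation-invariant pooling, output MLP); the layer sizes, activations, and the `deepsets_forward` helper are assumptions for the example, not the repository's actual architecture:

```python
import numpy as np

def deepsets_forward(self_obs, neighbor_obs, params):
    """Toy deep-sets forward pass: embed each neighbor with a shared
    MLP (phi), pool by mean, concatenate with the self-observation,
    and run the result through an output MLP (rho)."""
    W_phi, b_phi, W_rho, b_rho = params
    # Shared embedding applied to every neighbor row, then
    # permutation-invariant mean pooling over the neighbor axis.
    embedded = np.tanh(neighbor_obs @ W_phi + b_phi)   # (n_neighbors, d_embed)
    pooled = embedded.mean(axis=0)                     # (d_embed,)
    joint = np.concatenate([self_obs, pooled])
    return np.tanh(joint @ W_rho + b_rho)

# Illustrative dimensions: 6-dim self observation, 3 neighbors with
# 4 features each, 8-dim embedding, 4-dim output.
rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(6 + 8, 4)), np.zeros(4))
out = deepsets_forward(rng.normal(size=6), rng.normal(size=(3, 4)), params)
```

Because the pooling is a mean over neighbors, reordering the neighbor rows leaves the output unchanged, which is the property that makes the network size-agnostic.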

Fixed issue #91; see that issue for more details.

Hi, I came across the error "enjoy.py: unrecognized arguments: --qudas_render=True" when I executed the following command: python -m swarm_rl.enjoy --algo=APPO --env=quadrotor_multi --replay_buffer_sample_prob=0 --quads_use_numba=False --qudas_render=True --train_dir=PATH_TO_TRAIN_DIR --experiment=EXPERIMENT_NAME...
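The "unrecognized arguments" message is standard argparse behavior: the parser rejects any flag it was not configured with, so a misspelling such as `--qudas_render` instead of `--quads_render` (assuming that is the intended spelling of the flag) fails with exit code 2. A minimal, self-contained reproduction:

```python
import argparse

# Hypothetical stand-in for the real enjoy.py parser, used only to
# demonstrate how argparse reports a misspelled flag.
parser = argparse.ArgumentParser(prog="enjoy.py")
parser.add_argument("--quads_render", type=bool, default=False)

# The correctly spelled flag parses fine.
args = parser.parse_args(["--quads_render=True"])

# A misspelled flag triggers "unrecognized arguments"; argparse
# prints the error and raises SystemExit with code 2.
try:
    parser.parse_args(["--qudas_render=True"])
except SystemExit as exc:
    exit_code = exc.code
```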

When I visualize the simulation, I run into the following Pyglet problem. How can I fix it? IMPORTING OPENGL RENDERING MODULE. THIS SHOULD NOT BE IMPORTED IN HEADLESS MODE! Traceback...

Excuse me, how can I visualize the simulation environment?

Hello, can you tell me how to train the scenario in the third picture, where a big ball hits the swarm? Which part of the training produces it? The scenario I...

Hi! Thanks for your great work and contribution to the open-source community! I noticed that there is no goal-related information in the observation. How does the agent get the goal info? Is...