Edouard Leurent


The agent's online visualizations can now be saved when logged with the TensorBoard writer. The automatic call to and saving of RunAnalyzer still needs to be added.

Hi, No, that is strange; it seems to be working fine for me: ![image](https://user-images.githubusercontent.com/1706935/185741012-7722c2aa-c183-4848-ac68-42ae7ade3e83.png) Could you try to run a separate script where you instantiate and render highway-env manually, to...

Hi, Unfortunately, the software does not currently support this use case, so you will have to modify it a little. Basically, - Normally, in an RL setting, you (the...

- The agent makes the u-turn at the end of the lane because there is no next lane to follow. By default, in this situation, the agent is configured to...

The latest version only supports gym>=0.26. If you're using an earlier version of gym, you should install a previous release of highway-env. gym changed its interface: after version 0.26,...
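The interface change in question is the step/reset signature: before gym 0.26, `step()` returned a 4-tuple `(obs, reward, done, info)`, while from 0.26 onward it returns a 5-tuple `(obs, reward, terminated, truncated, info)`. A minimal compatibility shim (a hypothetical helper, not part of highway-env) could normalize both forms:

```python
def step_compat(step_result):
    """Normalize a gym step() result to the >=0.26 five-tuple.

    Before gym 0.26: (obs, reward, done, info)
    From gym 0.26:   (obs, reward, terminated, truncated, info)
    """
    if len(step_result) == 4:
        obs, reward, done, info = step_result
        # Treat the legacy `done` flag as termination; no truncation info exists.
        return obs, reward, done, False, info
    return step_result
```

This lets downstream code unpack five values regardless of which gym version produced the result.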

Yes, exactly. If your agent supports it, you can also call `env.get_available_actions()`, which will exclude the FASTER action when the vehicle is already at maximum speed.
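The masking logic behind `env.get_available_actions()` can be sketched roughly as follows. This is an illustrative standalone sketch, not highway-env's actual implementation; the action indices assume the default discrete meta-action mapping, and the helper name and parameters are hypothetical:

```python
# Assumed default meta-action mapping (0: LANE_LEFT, 1: IDLE,
# 2: LANE_RIGHT, 3: FASTER, 4: SLOWER).
ACTIONS = {0: "LANE_LEFT", 1: "IDLE", 2: "LANE_RIGHT", 3: "FASTER", 4: "SLOWER"}

def available_actions(speed, max_speed, lane, num_lanes):
    """Return the sorted action indices valid in the current state (hypothetical helper)."""
    actions = [1]  # IDLE is always available
    if lane > 0:
        actions.append(0)  # can change left only if a lane exists on the left
    if lane < num_lanes - 1:
        actions.append(2)  # can change right only if a lane exists on the right
    if speed < max_speed:
        actions.append(3)  # FASTER is excluded once at maximum speed
    if speed > 0:
        actions.append(4)  # SLOWER is excluded once stopped
    return sorted(actions)
```

For a vehicle already at maximum speed in the leftmost lane, the mask would drop both FASTER and LANE_LEFT, leaving only IDLE, LANE_RIGHT, and SLOWER.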

> While creating a multi-agent setting in highway-v0 after modifying it to continue till all agents crash or finish the duration of an episode, when the first agent (observer_vehicle) collides,...

Hi, The default script is `scripts/experiments.py`; you can run it however you like (from the command line, an IDE, or otherwise).

This is now implemented with `TupleObservation`.

Hey, I think I fixed this bug today; would you mind upgrading rl-agents and trying again?