[Feature Request] Include group-level done fields in the VMAS environment
Motivation
In the current implementation of the VMAS environment, the done fields are only available at the root of the tensordict. However, for training it is useful to also have them at the group level.
This is also an issue for the TorchRL tutorial scripts multiagent_ppo.py and multiagent_competitive_ddpg.py, which have to manually add the done and terminated entries at the group level with the right shape:
https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/multiagent_ppo.py#L655 https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/multiagent_competitive_ddpg.py#L732
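For reference, the patch in those tutorials looks roughly like this (a sketch mirroring the linked code; `data` is a batch collected from the env and `"agents"` is the VMAS group name):

```python
# Copy the global done/terminated into the agents group, expanded to the
# per-agent reward shape, so that downstream loss/value modules find them.
for key in ("done", "terminated"):
    data.set(
        ("next", "agents", key),
        data.get(("next", key))
        .unsqueeze(-1)
        .expand(data.get_item_shape(("next", env.reward_key))),
    )
```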
The PettingZoo env implementation does include the group-level done fields.
So, can we also include these fields in the VMAS code? If so, I'd be willing to work on this feature.
Checklist
- [x] I have checked that there is no similar issue in the repo (required)
Tagging @matteobettini, as you also worked on the VmasEnv.
Hey thanks for opening this.
So the idea is that the location of the done keys of a multi-agent env will tell you how the env operates. For example (see the sketch after this list):
- in PettingZoo, the done and the rewards are per-agent, so these keys are in the group tds (we also keep the global done, as this is what lets us decide when to reset)
- in VMAS, the done is global (thus at the root of the td) and the reward is per-agent (in the subgroups). This is why, before training, we expand the done to match the reward shape and make torch happy
- in SMAC, both rewards and done are global (thus at the root), so potentially we could avoid expanding them for computations
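To make this concrete, here is how you can see it on a VMAS env (a rough sketch; the scenario name is just an example):

```python
from torchrl.envs.libs.vmas import VmasEnv

env = VmasEnv(scenario="balance", num_envs=4)
td = env.reset()
print("done" in td.keys())                  # True: the global done lives at the root
print(("agents", "done") in td.keys(True))  # False: there is no per-agent done entry
print(env.reward_key)                       # ("agents", "reward"): reward is per-agent
```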
In general, the rationale is that if some of the keys need to be expanded later, this can be done lazily, at the time you need it.
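(The expansion itself is also cheap when done this way, since `expand` returns a view rather than a copy:)

```python
import torch

done = torch.zeros(32, 1, dtype=torch.bool)      # global done, shape [batch, 1]
per_agent = done.unsqueeze(-1).expand(32, 3, 1)  # per-agent view, no copy made
assert per_agent.data_ptr() == done.data_ptr()   # same underlying storage
```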
Pre-adding these extra keys and creating new entries in the env tds, regardless of whether you will need them, could lead to extra computation, since torchrl will then track a larger done_spec and so on.
Another motivation: right now, by looking at the td coming from a vmas env, you immediately know that there is no per-agent done. If instead we add it, a user would not immediately know that it is just a copy of the global done that is useful to have for some training operations.
Let me know what you think
> So the idea is that the location of the done keys of a multi-agent env will tell you how the env operates.
That makes sense! And I agree that it is more intuitive this way than having the fields everywhere.
However, IMO it is suboptimal to have to manually patch the data so that it is compatible with the trainer. I'd expect TorchRL environments and TorchRL trainers to be plug-and-play compatible.
Or at least to have a default solution for this. I see some options for such a solution:
- Adding an env transform to patch the data (I don't think such a transform exists yet?).
- Adding a postproc for the collector to patch the data (IMO the preferred approach).
What do you think of this?
Yeah, option 2 is the best.
We actually already have that transform in our implementations: https://github.com/pytorch/rl/blob/997d90e1b4e497707903801addee64a199f1ce1a/sota-implementations/multiagent/utils/utils.py#L21
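Roughly, it does something like this (a hedged sketch of the linked code, not the exact implementation; it copies the root-level done entries into the reward's group td, expanded to the reward shape):

```python
from torchrl.envs.transforms import Transform

class DoneTransform(Transform):
    """Sketch: expand the root-level done entries to the per-agent reward
    shape and write them into the reward's group td (under "next")."""

    def __init__(self, reward_key=("agents", "reward"), done_keys=("done", "terminated")):
        super().__init__()
        self.reward_key = reward_key  # assumed to be a nested (tuple) key
        self.done_keys = done_keys

    def forward(self, tensordict):
        group = self.reward_key[:-1]  # e.g. ("agents",)
        reward_shape = tensordict.get(("next", *self.reward_key)).shape
        for done_key in self.done_keys:
            tensordict.set(
                ("next", *group, done_key),
                tensordict.get(("next", done_key)).unsqueeze(-1).expand(reward_shape),
            )
        return tensordict
```

In the sota-implementations scripts it is used as a collector postproc, e.g. `postproc=DoneTransform(reward_key=env.reward_key, done_keys=env.done_keys)`.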
We could consider extending it and adding it to the lib!
@vmoens
Oh, this is great! Should we move this transform to the main lib and also use this in the tutorials? I'm open to working on this.
I'm open to it. @matteobettini, do you remember why it's not part of the core lib? The docstrings are a bit dull and the name of the transform is a bit too generic for my taste, but it'd fit nicely in the lib!
I don’t remember.
Yes, agreed, we need to extend it and test it a bit, but it would be very useful to have.
We could also consider having a “multiagent” file in the transforms to avoid clutter.
Of course, if the usage is only in MARL settings, that makes sense.