
[BUG] Clarify non-optionality of `TensorDictPrimer`

matteobettini opened this issue on Aug 05 '24

Without the primer, the collector does not feed any hidden state to the policy

In the RNN tutorial it is stated that the primer is optional and that it is used only to store the hidden states in the buffer.

This is not true in practice. Removing the primer results in the collector not feeding the hidden states to the policy during execution, which silently causes the RNN to lose all recurrence.

In the tutorial it seems like the only reason the primer is there is to store hidden states in the buffer (which I would also highly advise against, as replaying stored hidden states effectively makes any algorithm on-policy and can lead to severe issues).

I think the tutorial needs to be changed to remove any claims about the primer being optional. Instead, strong claims about its non-optionality should be made, because if a user removes it, the tutorial will silently run without recurrence. For context, the primer in question is the transform generated by the LSTM module itself, as sketched below.
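
A minimal sketch of the relevant setup, simplified from the tutorial (a vector-observation CartPole rather than pixels; sizes are illustrative):

```python
# Sketch of the tutorial's primer setup, simplified to a
# vector-observation CartPole; sizes are illustrative.
from torchrl.envs import GymEnv, InitTracker, TransformedEnv
from torchrl.modules import LSTMModule

env = TransformedEnv(GymEnv("CartPole-v1"), InitTracker())
lstm = LSTMModule(input_size=4, hidden_size=64, in_key="observation", out_key="embed")

# The line whose optionality is in question: it seeds zero-filled
# "recurrent_state_h"/"recurrent_state_c" entries at reset and registers
# them in the spec, so the collector carries them from step to step.
env.append_transform(lstm.make_tensordict_primer())

print(env.reset())  # the reset tensordict now contains the recurrent states
```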

Reproduce

To reproduce, comment out this line:

https://github.com/pytorch/rl/blob/0063741839a3e5e1a527947945494d54f91bc629/tutorials/sphinx-tutorials/dqn_with_rnn.py#L269

and print the policy input at this line:

https://github.com/pytorch/rl/blob/0063741839a3e5e1a527947945494d54f91bc629/torchrl/collectors/collectors.py#L733

You will see that no hidden state is fed to the RNN during execution, and no errors or warnings are raised.
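
Alternatively, the same check can be done end to end without editing library files. A hedged sketch, again simplified from the tutorial (vector observations instead of pixels; sizes are illustrative):

```python
# End-to-end sketch: collect a few frames and check whether recurrent
# states actually flow through the collector. Simplified from the
# tutorial; sizes are illustrative.
from tensordict.nn import TensorDictModule, TensorDictSequential
from torchrl.collectors import SyncDataCollector
from torchrl.envs import GymEnv, InitTracker, TransformedEnv
from torchrl.modules import LSTMModule, MLP, QValueModule

env = TransformedEnv(GymEnv("CartPole-v1"), InitTracker())
lstm = LSTMModule(input_size=4, hidden_size=64, in_key="observation", out_key="embed")
head = TensorDictModule(
    MLP(in_features=64, out_features=2), in_keys=["embed"], out_keys=["action_value"]
)
policy = TensorDictSequential(lstm, head, QValueModule(action_space=None, spec=env.action_spec))

env.append_transform(lstm.make_tensordict_primer())  # comment out to reproduce

collector = SyncDataCollector(env, policy, frames_per_batch=16, total_frames=16)
data = next(iter(collector))
# With the primer, "recurrent_state_h"/"recurrent_state_c" show up in the
# collected batch; without it, those keys are absent and the LSTM silently
# restarts from zeros at every step, with no error or warning.
print([key for key in data.keys(True) if "recurrent" in str(key)])
collector.shutdown()
```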

The future I want to live in

In the future I want to live in, there are no primers. The TorchRL components are able to look at the policy outputs and carry forward whatever is in "next". The primer, for me, is a pain point unique to TorchRL: users doing recurrence in other libraries have no equivalent of it, and will probably forget it, causing major silent bugs.
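
To illustrate the idea, a hypothetical sketch (this is not an existing TorchRL API, just the proposed behavior in pseudo-form): instead of relying on a primer to register recurrent-state keys in the spec, everything under "next" would be promoted into the next step's input, so policy-written state flows forward automatically:

```python
# Hypothetical sketch of the proposed behavior; carry_everything_forward
# is not a real TorchRL function.
from tensordict import TensorDict

def carry_everything_forward(step_output: TensorDict) -> TensorDict:
    # Promote the whole "next" sub-tensordict (observations, done flags,
    # and any policy-written state such as recurrent_state_h/c) to the
    # root, so it becomes the input of the following policy call.
    return step_output.get("next").exclude("reward")
```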
