Sylvain Marchienne
I am also looking for an example for the case where the target `y` is a multi-output one:

```
[[0 1 0 0 1],
 [1 0 0 0 1],
 ...]
```

Any trick?
This has been discussed in #7 and there is no implementation of it provided by the authors yet. My team and I have been working on an implementation of various...
We implemented the supervision loss here: https://github.com/sylvainma/Summarizer/blob/b83635f022641765729cc29af4de95bac7f1cdc1/summarizer/models/dsn.py#L101 It would be nice to have the authors confirm whether this is correct.
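For reference (and to answer the multi-output question above), here is a minimal, generic sketch of a supervised loss over binary frame-level labels. The variable names and shapes are hypothetical and this is not copied from the linked `dsn.py`:

```python
import torch
import torch.nn.functional as F

# Hypothetical example: `scores` are frame-level importance probabilities
# in [0, 1] predicted by the model, `y` is the multi-output 0/1 target
# from the question above (here a single sequence of 5 frames).
scores = torch.rand(1, 5, requires_grad=True)
y = torch.tensor([[0., 1., 0., 0., 1.]])

# Binary cross-entropy treats each frame as an independent 0/1 label,
# so it handles a multi-output target directly.
sup_loss = F.binary_cross_entropy(scores, y)
sup_loss.backward()
```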
To set a bit of context: REINFORCE implementations usually compute a surrogate loss `L` whose gradient, once computed by autograd, matches the theoretical policy gradient of `J(theta)` (up to the sign, since `L` is minimized while `J(theta)` is maximized).
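As an illustration of that convention (hypothetical names, not the authors' code), a REINFORCE-style surrogate loss in PyTorch could be written as:

```python
import torch

def reinforce_loss(log_probs, episode_return, baseline=0.0):
    """Surrogate loss whose autograd gradient matches the REINFORCE policy gradient.

    log_probs:      tensor of log pi(a_t | s_t) for the sampled actions
    episode_return: scalar reward/return R of the sampled episode
    baseline:       optional baseline b to reduce variance
    """
    advantage = episode_return - baseline
    # Minimizing -(R - b) * sum_t log pi(a_t | s_t) makes autograd produce
    # -(R - b) * grad log pi, i.e. a gradient ascent step on J(theta).
    return -advantage * log_probs.sum()

# Toy usage with made-up numbers:
log_probs = torch.log(torch.tensor([0.8, 0.6, 0.9], requires_grad=True))
loss = reinforce_loss(log_probs, episode_return=1.0, baseline=0.5)
loss.backward()
```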
@bjameslondon123 did you solve this?