transformers-into-vaes

passing projected latent space

safakkbilici opened this issue 3 years ago · 2 comments

Thanks for this nice work and reproducible code. If I understood your approach correctly, I think I am missing the line where you pass the latent space z to the decoder as encoder_hidden_states. I would be glad if you could point out that line. Thanks.

safakkbilici avatar Aug 08 '21 20:08 safakkbilici

Hello. The latent z is passed as past_key_values rather than as the decoder's encoder_hidden_states:

https://github.com/seongminp/transformers-into-vaes/blob/16205c8da8731b0097d80eeca219a878e0397beb/vendor_t5.py#L134

I think encoder_hidden_states would be the more natural choice, but using past_key_values required fewer modifications to the transformers library. More importantly, the OPTIMUS paper uses past_key_values, so that is why I went with it.

z is rearranged before being fed into the decoder here: https://github.com/seongminp/transformers-into-vaes/blob/16205c8da8731b0097d80eeca219a878e0397beb/vendor_t5.py#L226
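To make the shape bookkeeping concrete, here is a minimal, self-contained sketch of the idea, not the repo's actual code: it assumes a GPT-2-style decoder (as in OPTIMUS) rather than the T5 decoder in vendor_t5.py, whose per-layer cache also carries cross-attention states. The class name `LatentToPastKeyValues` and all dimensions below are illustrative.

```python
import torch
import torch.nn as nn
from transformers import GPT2Config, GPT2LMHeadModel


class LatentToPastKeyValues(nn.Module):
    """Hypothetical sketch: project latent z into the legacy tuple-of-tuples
    past_key_values structure that transformers decoders accept. Each layer
    gets a (key, value) pair of shape (batch, num_heads, mem_len, head_dim);
    here mem_len == 1, so z acts as a single extra "memory" position every
    self-attention layer can attend to."""

    def __init__(self, latent_dim, num_layers, num_heads, head_dim):
        super().__init__()
        self.num_layers, self.num_heads, self.head_dim = num_layers, num_heads, head_dim
        # One linear map producing a key and a value vector for every layer.
        self.proj = nn.Linear(latent_dim, num_layers * 2 * num_heads * head_dim)

    def forward(self, z):
        batch = z.size(0)
        mem = self.proj(z).view(
            batch, self.num_layers, 2, self.num_heads, 1, self.head_dim
        )
        # Rearrange into ((key, value), ...) with one pair per decoder layer.
        return tuple((mem[:, i, 0], mem[:, i, 1]) for i in range(self.num_layers))


config = GPT2Config()                 # randomly initialized; no download needed
decoder = GPT2LMHeadModel(config)
to_past = LatentToPastKeyValues(
    latent_dim=32,
    num_layers=config.n_layer,
    num_heads=config.n_head,
    head_dim=config.n_embd // config.n_head,
)

z = torch.randn(2, 32)                                    # (batch, latent_dim)
input_ids = torch.randint(0, config.vocab_size, (2, 5))
# The mask must cover the injected memory position plus the real tokens.
attention_mask = torch.ones(2, 1 + input_ids.size(1), dtype=torch.long)
out = decoder(
    input_ids=input_ids,
    past_key_values=to_past(z),
    attention_mask=attention_mask,
)
print(out.logits.shape)  # torch.Size([2, 5, 50257])
```

Because every self-attention layer attends to the injected (key, value) pair, z conditions the whole decoder without touching the cross-attention code path, which is consistent with why this route needs fewer library modifications.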

seongminp avatar Aug 08 '21 23:08 seongminp

Oh, I got it. Thank you for your kind response.

safakkbilici avatar Aug 09 '21 07:08 safakkbilici