Jack Morris


Hi! This seems like a useful feature, and I'm curious how things change as well (from my experience, I don't think the hypotheses get monotonically closer to the true embedding...
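For anyone who wants to check this empirically, here is a minimal sketch (assuming you already have the hypothesis string produced at each correction step, called `hypotheses_per_step` below, plus the target embedding): re-embed each step's hypothesis and track its cosine similarity to the target.

```python
import torch
from sentence_transformers import SentenceTransformer

# Hypothetical inputs: one hypothesis string per correction step, and the
# target embedding we are trying to invert.
hypotheses_per_step = ["first guess ...", "second guess ...", "third guess ..."]
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
target = embedder.encode("the true hidden text", convert_to_tensor=True)

# Re-embed each hypothesis and measure cosine similarity to the target.
# If inversion were monotone, these numbers would only ever increase.
for step, text in enumerate(hypotheses_per_step):
    hyp = embedder.encode(text, convert_to_tensor=True)
    sim = torch.nn.functional.cosine_similarity(hyp, target, dim=0)
    print(f"step {step}: cosine similarity = {sim.item():.4f}")
```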

I think it depends on what you're using the hypotheses for. You could just flatten the output, or only add the hypothesis embedding from the beam that's closest to the ground truth.
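As a sketch of the second option (assuming `beam_hypotheses` is the list of candidate strings from one beam and the ground-truth embedding is available), pick the hypothesis whose re-embedding is closest:

```python
import torch
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def closest_hypothesis(beam_hypotheses: list[str], target_emb: torch.Tensor) -> str:
    """Return the beam hypothesis whose embedding is nearest the ground truth."""
    hyp_embs = embedder.encode(beam_hypotheses, convert_to_tensor=True)  # (beam, dim)
    sims = torch.nn.functional.cosine_similarity(
        hyp_embs, target_emb.unsqueeze(0), dim=1
    )
    return beam_hypotheses[int(sims.argmax())]
```

Flattening the output would just mean concatenating every beam's hypotheses into one list before (or instead of) this selection step.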

Hi @icerooqiu -- we train the whole model end-to-end to generate text conditioned on embeddings. So the MLP layer is updated via gradient descent to try to make the correct...
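For intuition, here's a minimal sketch of that conditioning (the class and parameter names are illustrative, not the actual vec2text code): an MLP maps the fixed-size embedding to a short sequence of pseudo-token embeddings fed into the seq2seq model, and the ordinary generation loss backpropagates into the MLP's weights.

```python
import torch
import torch.nn as nn

class EmbeddingProjector(nn.Module):
    """Hypothetical adapter: map one sentence embedding to n pseudo-tokens."""
    def __init__(self, emb_dim: int = 384, hidden_dim: int = 768,
                 n_tokens: int = 8, model_dim: int = 512):
        super().__init__()
        self.n_tokens = n_tokens
        self.model_dim = model_dim
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, n_tokens * model_dim),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # (batch, emb_dim) -> (batch, n_tokens, model_dim)
        out = self.mlp(emb)
        return out.view(-1, self.n_tokens, self.model_dim)

# The projector's output is passed as `inputs_embeds` to the seq2seq encoder,
# so the cross-entropy loss on the generated text updates the MLP end-to-end.
```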

Thanks; I'll look into it. In the meantime, I've trained a decent inversion model using almost exactly these settings, which is available here: https://huggingface.co/jxm/sentence-transformers_all-MiniLM-L6-v2__msmarco__128
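That checkpoint inverts embeddings from sentence-transformers/all-MiniLM-L6-v2 at sequences up to 128 tokens, so inputs should be embedded the same way; a rough sketch of producing them (loading the inversion checkpoint itself goes through the vec2text codebase, whose exact API isn't reproduced here):

```python
from sentence_transformers import SentenceTransformer

# The inversion model above was trained on all-MiniLM-L6-v2 embeddings of
# sequences up to 128 tokens, so embed inputs the same way.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embedder.max_seq_length = 128

embeddings = embedder.encode(
    ["a sentence whose embedding we want to invert"],
    convert_to_tensor=True,
)
# `embeddings` is what you would hand to the inversion model linked above.
```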

I ran this command and it worked fine for me.

I'll get back to you. You can't use it that way, though. Basically, I haven't trained the expensive corrector model; I only have the zero-step inversion model for this specific model,...
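For context on the distinction: the zero-step model maps an embedding to text in a single generation pass, while the corrector iteratively refines that guess by re-embedding it and conditioning on the target. A rough sketch of the loop (the callables here are placeholders, not the actual library API):

```python
from typing import Callable

def invert(
    target_emb,
    zero_step_generate: Callable,  # placeholder: embedding -> text
    corrector_generate: Callable,  # placeholder: (target, hyp_emb, hyp_text) -> text
    embed: Callable,               # placeholder: text -> embedding
    n_steps: int = 0,
) -> str:
    """Sketch: zero-step inversion, then optional iterative correction."""
    # Zero-step model: one generation pass straight from the embedding.
    hypothesis = zero_step_generate(target_emb)

    # Corrector: repeatedly re-embed the current guess and generate a new
    # one conditioned on the target embedding, the guess, and its embedding.
    for _ in range(n_steps):
        hypothesis = corrector_generate(target_emb, embed(hypothesis), hypothesis)
    return hypothesis
```

Without a trained corrector, only the `n_steps = 0` path is available for this model.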

Yep, this looks right to me. I think we trained the model for more steps after submission, which is why the scores went up a little bit. To get the...

Hi! I took the train and validation sets from DPR (https://arxiv.org/abs/2004.04906 / https://github.com/facebookresearch/DPR). I'll send you a message offline to discuss further.

Oh, but I don't think Table 2 involves decoding any sequences longer than the training length. I train on sequences up to 128 tokens and use those for testing too. I...