
Weird output of the edit encoder

grll opened this issue 7 years ago · 1 comment

I spotted something strange happening in edit_model/edit_encoder.py, in the seq_batch_noise function at line 62:

new_values[:, 0, :] = phint * m_expand + prand * (1 - m_expand)

This only assigns a noisy version of the first vector (position 0), while every other vector is left at 0, instead of noising all of them as specified in the docstring. This then propagates to the input of the attention decoder, so the attention layers over the insert and delete embeddings only ever see information from the first insert or delete token.
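The effect is easy to reproduce. Here is a minimal NumPy sketch (the names and shapes are illustrative, not the repo's exact variables; I'm assuming phint/prand are noise tensors and m_expand is a 0/1 mask, all of shape (batch, seq_len, dim)):

```python
import numpy as np

batch, seq_len, dim = 2, 3, 4
rng = np.random.default_rng(0)

# Illustrative stand-ins for the tensors in seq_batch_noise.
phint = rng.random((batch, seq_len, dim))
prand = rng.random((batch, seq_len, dim))
m_expand = np.ones((batch, seq_len, dim))  # mask of ones, for simplicity

noisy = phint * m_expand + prand * (1 - m_expand)

# As written on line 62: only sequence position 0 receives the noisy values.
new_values = np.zeros((batch, seq_len, dim))
new_values[:, 0, :] = noisy[:, 0, :]
assert (new_values[:, 1:, :] == 0).all()  # all later positions stay zero

# What the docstring seems to describe: noise at every position.
new_values_fixed = noisy
assert (new_values_fixed[:, 1:, :] != 0).any()
```

So unless the slice on position 0 is intentional, the fix would presumably be to assign the whole tensor rather than `[:, 0, :]`.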

Is there a reason for this, or is it just a mistake?

grll avatar Oct 16 '18 11:10 grll

@grll I also found some weird code; see my issue.

wugh avatar Dec 06 '18 14:12 wugh