Sean Robertson

34 comments

Also seeing this specifically with PostCSS + SugarSS

I put a first version of the batched model at https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation-batched.ipynb via 31fdb61387e62948f6a24dc9a2dadd6d3221a73c The biggest changes are using `pack_padded_sequence` before the encoder RNN and `pad_packed_sequence` after it, and the [masked...
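For reference, here's a minimal sketch of that packing pattern around an encoder GRU (the shapes, sizes, and lengths below are just illustrative):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Illustrative shapes: a padded batch of embedded sequences plus true lengths,
# sorted longest-first as pack_padded_sequence expects.
embedded = torch.randn(10, 4, 128)     # (max_len, batch, embed_size)
lengths = torch.tensor([10, 8, 6, 3])  # actual length of each sequence

gru = nn.GRU(128, 256)

# Pack so the RNN skips the padded positions entirely...
packed = pack_padded_sequence(embedded, lengths)
packed_outputs, hidden = gru(packed)
# ...then unpack back to a padded (max_len, batch, hidden_size) tensor.
outputs, output_lengths = pad_packed_sequence(packed_outputs)
```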

The paper mentions an output layer $g$ with those arguments after the RNN state $s_i$. I found that adding the (just calculated) context $c_i$ as another input to that output...
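A hedged sketch of that change, concatenating the decoder state $s_i$ with the just-computed context $c_i$ before the output projection (the layer name and sizes here are made up):

```python
import torch
import torch.nn as nn

hidden_size, output_size = 256, 10000
out = nn.Linear(hidden_size * 2, output_size)  # takes state + context

s_i = torch.randn(1, hidden_size)  # decoder RNN state at step i
c_i = torch.randn(1, hidden_size)  # attention context at step i

# Feed the context as an extra input to the output layer g(...).
logits = out(torch.cat((s_i, c_i), dim=1))
```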

Thanks, that does look better. The Shakespeare tutorial isn't complete yet - when it is, can I ping you to add it in (or show me how)?

It's in requirements.txt for pytorch/text, but it looks like I skipped that installation step in the tutorial.

As with #98 you can remove all lines referencing `sconce` and `job`. Will update the file itself when I do the 0.4 update.

This won't be a very satisfying answer, but I believe the reason is just that this is left over from a non-bidirectional encoder, and this slicing was a workaround to...

Are you comparing to the latest implementation, at https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb ?

The simplest way is to concatenate features into a single input vector. However, this only works if your RNN takes vector input, not discrete inputs (LongTensor) through an embedding layer...
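A minimal sketch of that concatenation for continuous features (the feature names and dimensions are invented for illustration):

```python
import torch

# Two hypothetical continuous feature streams per time step.
acoustic = torch.randn(10, 4, 40)  # (seq_len, batch, 40 features)
prosodic = torch.randn(10, 4, 8)   # (seq_len, batch, 8 features)

# Concatenate along the feature dimension into one input vector per step;
# the RNN's input_size would then be 40 + 8 = 48.
rnn_input = torch.cat((acoustic, prosodic), dim=2)  # (10, 4, 48)
```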

For the initializer you would need to add arguments for your feature sizes, and create a new `Embedding` layer for each discrete feature. In the `forward()` method you would add...
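A sketch of what that could look like, assuming two hypothetical discrete features (words and POS tags), each with its own embedding:

```python
import torch
import torch.nn as nn

class MultiFeatureEncoder(nn.Module):
    def __init__(self, word_vocab_size, pos_vocab_size, embed_size, hidden_size):
        super().__init__()
        # One Embedding layer per discrete feature.
        self.word_embedding = nn.Embedding(word_vocab_size, embed_size)
        self.pos_embedding = nn.Embedding(pos_vocab_size, embed_size)
        self.gru = nn.GRU(embed_size * 2, hidden_size)

    def forward(self, words, pos_tags, hidden=None):
        # Embed each feature, then concatenate the embeddings into a
        # single input vector per time step before running the RNN.
        embedded = torch.cat((
            self.word_embedding(words),    # (seq_len, batch, embed_size)
            self.pos_embedding(pos_tags),  # (seq_len, batch, embed_size)
        ), dim=2)
        return self.gru(embedded, hidden)
```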