luozhouyang

Results 25 comments of luozhouyang

Word embeddings here are actually a 2-D tensor with shape `(vocab_size, embedding_size)`. This tensor is updated along with the other parameters by backpropagation.

No special algorithm is used: not word2vec, not GloVe, just a learnable 2-D matrix.
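To make the point concrete, here is a minimal sketch (in plain NumPy, not the repo's actual code) of an embedding table as nothing more than a learnable matrix: lookup is row indexing, and training is an ordinary gradient step on the looked-up rows. All sizes and the gradient are illustrative placeholders.

```python
import numpy as np

# Illustrative sizes, not values from the repo.
vocab_size, embedding_size = 10, 4
rng = np.random.default_rng(0)

# The embedding "layer" is just a 2-D matrix of trainable parameters.
embeddings = rng.normal(size=(vocab_size, embedding_size))

# "Lookup" is plain row indexing: token ids -> vectors.
token_ids = np.array([2, 5, 2])
vectors = embeddings[token_ids]  # shape (3, embedding_size)

# During training, gradients flow into the looked-up rows, so the matrix
# is updated like any other parameter. Sketched as one SGD step with a
# fake gradient of ones; np.add.at accumulates the repeated index 2.
grad = np.ones_like(vectors)
lr = 0.1
np.add.at(embeddings, token_ids, -lr * grad)
```

Note that no co-occurrence statistics or pretraining objective is involved; the rows simply become useful representations because the task loss pushes them there.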

This implementation is quite old. You can implement NMT easily with TF 2.0. As @mommi84 mentioned, [nmt_with_attention](https://www.tensorflow.org/tutorials/text/nmt_with_attention) is an excellent tutorial. If you want a `Transformer` model, you can find...

Add the argument `--num_gpus=1`. https://github.com/tensorflow/nmt/blob/0be864257a76c151eef20ea689755f08bc1faf4e/nmt/nmt.py#L231

Here is a tutorial from TensorFlow using TF 2.x: [nmt_with_attention](https://www.tensorflow.org/tutorials/text/nmt_with_attention)
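The core of that tutorial is Bahdanau (additive) attention. A minimal NumPy sketch of the scoring step is below; the weight matrices `W1`, `W2`, `v` and all dimensions are illustrative random parameters, not the tutorial's trained values.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative dimensions: attention units, encoder/decoder state sizes,
# and source sequence length.
rng = np.random.default_rng(1)
units, enc_dim, dec_dim, T = 8, 6, 6, 5
W1 = rng.normal(size=(enc_dim, units))
W2 = rng.normal(size=(dec_dim, units))
v = rng.normal(size=(units,))

enc_outputs = rng.normal(size=(T, enc_dim))  # one encoder state per source token
dec_state = rng.normal(size=(dec_dim,))      # current decoder hidden state

# Additive score: score(h_t, s) = v^T tanh(W1 h_t + W2 s)
scores = np.tanh(enc_outputs @ W1 + dec_state @ W2) @ v  # (T,)
weights = softmax(scores)                                # attention distribution over source
context = weights @ enc_outputs                          # context vector, (enc_dim,)
```

In the tutorial this corresponds to the attention layer inside the decoder; the context vector is concatenated with the decoder input at each step.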