Rui
Hi Ben, You can check the [project description here](https://github.com/oahziur/animated-archer#problem-formulation). `bestSol` is basically the best solution so far, which is a schedule matrix. `4 1 1 3 5 2 2 5...
@nave01314 Make sure your decoder_cell's number of layers is the same as the number of states you pass into it. It seems your example has two encoder layers (1 forward and 1...
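For illustration, a minimal sketch (TF 1.x graph mode; the placeholders, unit sizes, and variable names are assumptions for the example) of keeping the decoder layer count in sync with the encoder state tuple:

```python
import tensorflow as tf

num_units = 128

# Hypothetical bidirectional encoder: 1 forward + 1 backward layer, so the
# final state is a 2-tuple and the decoder cell needs 2 layers to accept it.
encoder_emb = tf.placeholder(tf.float32, [None, None, num_units])
seq_len = tf.placeholder(tf.int32, [None])
fw_cell = tf.nn.rnn_cell.LSTMCell(num_units)
bw_cell = tf.nn.rnn_cell.LSTMCell(num_units)

_, (fw_state, bw_state) = tf.nn.bidirectional_dynamic_rnn(
    fw_cell, bw_cell, encoder_emb, sequence_length=seq_len, dtype=tf.float32)
encoder_state = (fw_state, bw_state)  # 2 states -> 2 decoder layers

# Decoder cell with one layer per encoder state, so encoder_state can be
# passed directly as the decoder's initial state.
decoder_cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.LSTMCell(num_units) for _ in range(len(encoder_state))])
```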
@liuyujia1991 It should be possible, although I haven't tested it myself.
@Marilena263 If you want to use the `--ckpt` flag for training, you need to make a small change [here](https://github.com/tensorflow/nmt/blob/20dff79d0bd0e7a286a8063d0194fbc7903bbee3/nmt/train.py#L255). You may want to use the [load_model method](https://github.com/tensorflow/nmt/blob/20dff79d0bd0e7a286a8063d0194fbc7903bbee3/nmt/model_helper.py#L463) for loading from `--ckpt`...
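A minimal sketch of the kind of change I mean, assuming the `--ckpt` path has been made available to the training code (e.g. as an `hparams.ckpt` field, which is an assumption here) and reusing the variable names from `train.py`:

```python
# Inside train(), replacing the unconditional create_or_load_model call.
if hparams.ckpt:
  # Restore exactly the checkpoint passed via --ckpt.
  loaded_train_model = model_helper.load_model(
      train_model.model, hparams.ckpt, train_sess, "train")
  global_step = loaded_train_model.global_step.eval(session=train_sess)
else:
  # Default behavior: load the latest checkpoint in model_dir, or initialize.
  loaded_train_model, global_step = model_helper.create_or_load_model(
      train_model.model, model_dir, train_sess, "train")
```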
@Vab-jain Does removing the import and replacing `lookup_ops.index_to_string_table_from_file` with `tf.contrib.lookup.index_to_string_table_from_file` work for you in TF 1.6?
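In other words, something like this (a minimal sketch; the vocab path is just a placeholder):

```python
import tensorflow as tf

vocab_file = "/tmp/vocab.txt"  # placeholder path to your vocabulary file

# Replaces lookup_ops.index_to_string_table_from_file(...) in TF 1.6.
reverse_vocab_table = tf.contrib.lookup.index_to_string_table_from_file(
    vocab_file, default_value="<unk>")
```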
Hi @ngcabang You can start with the code for generating the image summary [here](https://github.com/tensorflow/nmt/blob/365e7386e6659526f00fa4ad17eefb13d52e3706/nmt/attention_model.py#L174). That will generate a heat map without the sentences on the left and top margins. The image...
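If you want the sentences on the margins, one option is to fetch the alignment matrix and plot it yourself. A minimal matplotlib sketch (this is not the NMT image summary itself; `alignments` is assumed to be a `[target_len, source_len]` numpy array taken from the decoder's alignment history):

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_attention(alignments, source_tokens, target_tokens):
  """Draw the attention heat map with the sentences on the margins."""
  fig, ax = plt.subplots()
  ax.imshow(alignments, cmap="viridis", aspect="auto")
  # Source sentence along the top margin.
  ax.set_xticks(np.arange(len(source_tokens)))
  ax.set_xticklabels(source_tokens, rotation=90)
  ax.xaxis.tick_top()
  # Target sentence along the left margin.
  ax.set_yticks(np.arange(len(target_tokens)))
  ax.set_yticklabels(target_tokens)
  fig.tight_layout()
  return fig
```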
@ttrouill @nave01314 Actually, I am not sure if the attention image for BeamSearch will be correct out of the box, since we traverse the decoding tree in BeamSearchDecoder to get...
@ttrouill I don't know if there is any existing branch or pull request on this, but I think you can follow this [issue](https://github.com/tensorflow/tensorflow/issues/13154) to get updates, if there are any.
@lzt2015 I think you can implement a Helper class using the [GreedyEmbeddingHelper](https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/contrib/seq2seq/python/ops/helper.py#L491) as a reference. Just replacing argmax with argmin should be sufficient.
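A minimal sketch of what I have in mind (assuming TF 1.6's `tf.contrib.seq2seq.GreedyEmbeddingHelper`; only `sample` changes):

```python
import tensorflow as tf

class GreedyMinEmbeddingHelper(tf.contrib.seq2seq.GreedyEmbeddingHelper):
  """Same as GreedyEmbeddingHelper, but picks the lowest-scoring token."""

  def sample(self, time, outputs, state, name=None):
    del time, state  # unused, kept for the Helper interface
    # argmin instead of argmax; cast to int32 since the decoder expects ids.
    return tf.cast(tf.argmin(outputs, axis=-1), tf.int32)
```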
@rajarsheem Yes, the outputs of dynamic_decode in the NMT codebase are the vocab logits. If you don't give BasicDecoder an output_layer and use GreedyEmbeddingHelper, I think it will use the RNN...
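For reference, a minimal sketch of the difference (TF 1.x `tf.contrib.seq2seq`; the embedding, start/end token ids, sizes, and initial state are placeholders for the example):

```python
import tensorflow as tf

vocab_size, num_units, batch_size = 10000, 128, 32

embedding = tf.get_variable("embedding", [vocab_size, num_units])
start_tokens = tf.fill([batch_size], 1)  # hypothetical <s> id
end_token = 2                            # hypothetical </s> id

decoder_cell = tf.nn.rnn_cell.LSTMCell(num_units)
initial_state = decoder_cell.zero_state(batch_size, tf.float32)
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding, start_tokens, end_token)

# With output_layer set, outputs.rnn_output holds vocab logits
# ([..., vocab_size]); without it, you get the raw RNN outputs of
# size num_units instead.
projection_layer = tf.layers.Dense(vocab_size, use_bias=False)
decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper, initial_state, output_layer=projection_layer)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
    decoder, maximum_iterations=20)
logits = outputs.rnn_output
```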