seq2seq
The in-graph beam search
Hi @dennybritz, the in-graph beam search is pretty nice. I have a couple of questions. Can you please clarify?
- If we need to save the inference graph for C++ deployment, are configurations like the beam width and length norm weight defined through a placeholder tensor, or do they have to be baked into the graph?
- What does the inference speed look like on the NMT task (as described in your paper https://arxiv.org/abs/1703.03906; beam width 10, K80 GPU)?
- It may be a little inflexible if we want to use additional information, such as a language model score, to guide the search. Do you have any comments on that aspect?
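For context on the length norm weight mentioned above: one common choice (not confirmed as this repo's exact implementation) is the GNMT-style length penalty from Wu et al. 2016, where a weight `alpha` controls how strongly longer hypotheses are favored. A minimal pure-Python sketch, with `alpha` as the assumed tunable weight:

```python
import math

def length_penalty(length, alpha=0.6):
    # GNMT-style length penalty: ((5 + |Y|)^alpha) / ((5 + 1)^alpha).
    # alpha is the "length norm weight"; alpha = 0 disables normalization.
    return ((5.0 + length) ** alpha) / ((5.0 + 1.0) ** alpha)

def rescore(log_prob, length, alpha=0.6):
    # Divide the hypothesis log-probability by the penalty so that
    # longer hypotheses are not unfairly penalized at ranking time.
    return log_prob / length_penalty(length, alpha)

# With alpha = 0 the penalty is 1 and scores are unchanged.
assert rescore(-10.0, 10, alpha=0.0) == -10.0
```

If the weight is fed as a placeholder tensor rather than a Python constant, it can be changed at serving time without rebuilding the graph, which is the crux of the first question.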
Many Thanks!
Hoping to get comments on this issue too.
Would like some more info on this too, particularly on guiding the search.
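On guiding the search: one way to picture it is score interpolation, where each candidate expansion is ranked by a weighted sum of the seq2seq log-probability and an external language model log-probability. This is a hypothetical sketch, not this repo's API; `candidates_fn`, `lm_score_fn`, and `lm_weight` are all stand-ins:

```python
import heapq

def beam_step(beams, candidates_fn, lm_score_fn, beam_width, lm_weight=0.3):
    """One beam-search expansion step mixing in an external LM score.

    beams: list of (prefix_tokens, cumulative_score) pairs.
    candidates_fn(prefix): returns (token, seq2seq_logprob) pairs
        (a stand-in for the decoder's softmax over the vocabulary).
    lm_score_fn(prefix, token): log-probability of appending `token`
        under the external language model (also a stand-in).
    """
    expanded = []
    for prefix, score in beams:
        for token, s2s_lp in candidates_fn(prefix):
            combined = score + s2s_lp + lm_weight * lm_score_fn(prefix, token)
            expanded.append((prefix + [token], combined))
    # Keep only the top `beam_width` hypotheses by combined score.
    return heapq.nlargest(beam_width, expanded, key=lambda b: b[1])

# Toy usage: two candidate tokens, a silent LM, beam width 1.
cands = lambda prefix: [(1, -1.0), (2, -0.5)]
silent_lm = lambda prefix, token: 0.0
print(beam_step([([], 0.0)], cands, silent_lm, beam_width=1))
```

Supporting this inside the in-graph beam search would require exposing a hook at the candidate-scoring step, which is exactly the flexibility concern raised above.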
It seems the beam search functionality cannot be added to the inference graph in case you need to use it in C++ or TF Serving. Kindly refer to this issue.