AmazingJ
You only need to use these two commands:

subword-nmt learn-bpe -s {num_operations} < {train_file} > {codes_file}
subword-nmt apply-bpe -c {codes_file} < {test_file} > {out_file}
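Note that before scoring, the BPE segmentation has to be undone on the model output; the usual one-liner for this (as in the subword-nmt README; file names here are illustrative) is:

sed -r 's/(@@ )|(@@ ?$)//g' < out.bpe > out.txt   # merge BPE sub-units back into whole words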
We use the BLEU-4 score by default, of course, computed with the multi-bleu tool. First, make sure you are doing the BPE step correctly:

subword-nmt learn-bpe -s {num_operations} < {train_file} > {codes_file}
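As a minimal sketch of the scoring step itself (file names are hypothetical, and the hypothesis must be tokenized the same way as the reference), the Moses multi-bleu script is invoked like this:

perl multi-bleu.perl reference.tok < hypothesis.tok   # prints BLEU-4 plus the 1- to 4-gram precisions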
Because the source and the target share the vocabulary, after applying BPE the vocabulary size should land close to the number of BPE merge operations, so this has little impact. I assume that you are...
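A minimal sketch of learning one joint BPE model over both sides (the file names and the merge count are assumptions, not the repo's actual settings):

cat train.src train.tgt > train.joint                      # pool source and target for a shared vocabulary
subword-nmt learn-bpe -s 10000 < train.joint > bpe.codes   # 10k merges is illustrative only
subword-nmt apply-bpe -c bpe.codes < train.src > train.src.bpe
subword-nmt apply-bpe -c bpe.codes < train.tgt > train.tgt.bpe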
Aha, I see. This is a bug in our code when running on multiple GPUs. There is a "def signal_handler()" function in "train.py" that you need to change to "...
Our baseline input can be the same linearized AMR graph as in Konstas et al. Only the concept nodes are retained as input to the Transformer model (an illustrative example follows the flag list below).

-train_src        # concept node sequence
-train_structure1 # ...
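For illustration only (this example is ours, not taken from the repo), a Konstas-style linearization of the AMR for "The boy wants to go" and the concept-only sequence that would feed -train_src could look like:

want-01 :arg0 ( boy ) :arg1 ( go-01 :arg0 boy )   # full linearized AMR, variables dropped
want-01 boy go-01                                 # concept-node sequence kept for -train_src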
But looking at the generated samples: although the validation-set BLEU is eight-point-something, the test-set generations suffer from a lot of repeated output.