pytorch-dual-learning

Implementation of Dual Learning NMT in PyTorch

4 pytorch-dual-learning issues

BLEU = 21.39, 49.1/26.8/17.6/12.2 — how is the 21.39 obtained from the last four numbers?
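The four numbers are the 1- to 4-gram precisions; the overall BLEU is their geometric mean multiplied by a brevity penalty. A minimal sketch of the arithmetic (the brevity penalty is not printed in the report above, so it is inferred here):

```python
import math

# n-gram precisions as reported: 49.1/26.8/17.6/12.2 (percent)
precisions = [0.491, 0.268, 0.176, 0.122]

# geometric mean of the four precisions
geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(f"geometric mean: {geo_mean * 100:.2f}")  # ~23.07

# the reported score also applies a brevity penalty BP = min(1, exp(1 - ref_len/hyp_len));
# BLEU = 21.39 together with a 23.07 geometric mean implies BP ~= 0.93
implied_bp = 21.39 / (geo_mean * 100)
print(f"implied brevity penalty: {implied_bp:.3f}")
```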

As training proceeds, the reward and the loss both become 'nan'. Did this problem occur with your data? A -> B ('[s]', 'Old power means the fossil ##AT##-##AT## nuclear energies...
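One way to localize this kind of failure (a sketch, not this repository's code; the `training_step` names are placeholders) is to fail fast on non-finite values and clip gradients before the optimizer update, since exploding policy-gradient losses are a frequent source of nan rewards:

```python
import torch

def check_finite(name: str, tensor: torch.Tensor) -> None:
    """Raise as soon as a tensor picks up nan/inf, so the offending step can be inspected."""
    if not torch.isfinite(tensor).all():
        raise RuntimeError(f"{name} contains nan/inf values")

def training_step(model, optimizer, loss):
    # illustrative fragment of a dual-learning update
    check_finite("loss", loss)
    optimizer.zero_grad()
    loss.backward()
    # clip exploding gradients, a common cause of nan in policy-gradient training
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
```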

zhen@zhen-Lenovo:~/pytorch-dual-learning$ ./train-dual.sh Namespace(alpha=0.01, cuda=False, dict=['data/groups/chatbot/dl_data/lm/dict.en.pkl', 'data/groups/chatbot/dl_data/lm/dict.de.pkl'], lm=['data/groups/chatbot/dl_data/lm/wmt16-en.pt', 'data/groups/chatbot/dl_data/lm/wmt16-de.pt'], log_every=5, model=['modelA', 'modelB'], nmt=['data/groups/chatbot/dl_data/wmt16-small/model.wmt16-ende-small.best.bin', 'data/groups/chatbot/dl_data/wmt16-small/model.wmt16-deen-small.best.bin'], save_n_iter=400, src=['data/groups/chatbot/dl_data/wmt16-dual/train-small.en', 'data/groups/chatbot/dl_data/wmt16-dual/train-small.de'], start_iter=0) loading pieces, part A load modelA from [data/groups/chatbot/dl_data/wmt16-small/model.wmt16-ende-small.best.bin] load train_srcA from [data/groups/chatbot/dl_data/wmt16-dual/train-small.en] load...

I run dual.py on a 1080Ti GPU with 12G of memory. The corpora for the NMT and LM models are both no larger than 50M. But when the step of...
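When memory use grows with the step count, a common cause is keeping tensors that still reference the computation graph (e.g. accumulating a loss without `.item()`, or scoring with the language model outside `torch.no_grad()`). A small monitoring sketch to check whether usage creeps up per step (assumes CUDA is available; not tied to this repository's code):

```python
import torch

def log_gpu_memory(step: int) -> None:
    """Print allocated/reserved CUDA memory so per-step growth is visible."""
    alloc = torch.cuda.memory_allocated() / 1024**2
    reserved = torch.cuda.memory_reserved() / 1024**2
    print(f"step {step}: allocated {alloc:.1f} MiB, reserved {reserved:.1f} MiB")

# typical fixes when the numbers grow every iteration:
#   total_loss += loss.item()   # not: total_loss += loss (keeps the graph alive)
#   with torch.no_grad():       # for LM scoring that is only used as a reward
#       lm_score = lm(sample)
```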