fairseq-zh-en
Pretrained model can not be loaded?
[root@localhost fairseq-zh-en]# ./wmt17_generate.sh
optimizing fconv for decoding
decoding to tmp/wmt17_en_zh/fconv_test
/root/torch/install/bin/luajit: .../install/share/lua/5.1/fairseq/models/ensemble_model.lua:134: inconsistent tensor size, expected r_ [10 x 33859], t [10 x 33859] and src [10 x 20490] to have the same number of elements, but got 338590, 338590 and 204900 elements respectively at /root/torch/pkg/torch/lib/TH/generic/THTensorMath.c:887
stack traceback:
  [C]: in function 'add'
  .../install/share/lua/5.1/fairseq/models/ensemble_model.lua:134: in function 'generate'
  ...torch/install/share/lua/5.1/fairseq/scripts/generate.lua:213: in main chunk
  [C]: in function 'require'
  ...install/lib/luarocks/rocks/fairseq-cpu/scm-1/bin/fairseq:17: in main chunk
  [C]: at 0x004064f0
| [zh] Dictionary: 33859 types
| [en] Dictionary: 29243 types
| IndexedDataset: loaded data-bin/wmt17_en_zh with 2000 examples
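In case it helps with debugging: the 33859 vs. 20490 mismatch suggests the binarized test data and the checkpoint were built with different dictionaries. A quick check along these lines might confirm it (the paths, the `fairseq` module name, and the `Dictionary:size()` call are my assumptions, not verified against this repo):

```sh
# Compare the vocab size shipped with the pretrained model against the one
# used to binarize the test data; they must match for generate to work.
luajit -e "require 'fairseq'; print(torch.load('wmt17.zh-en.fconv-cuda/dict.zh.th7'):size())"
luajit -e "require 'fairseq'; print(torch.load('data-bin/wmt17_en_zh/dict.zh.th7'):size())"
```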
Got the same problem. Seems like wmt17.zh-en.fconv-cuda should be a directory containing dict.en.th7, dict.zh.th7 and model.th7, but I just got a single file after unzipping it.
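For reference, here is roughly what I'd expect unpacking to look like (the archive name and format are guesses on my part):

```sh
# Unpack and inspect the pretrained model; archive name is an assumption.
tar xjf wmt17.zh-en.fconv-cuda.tar.bz2
ls wmt17.zh-en.fconv-cuda/
# expected: dict.en.th7  dict.zh.th7  model.th7
```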
@twairball Is there a link where I can get the folder containing those files rather than the single zh-en file?
Hi guys, sorry if the pretrained model is missing something -- I've since moved on to tensor2tensor and PyTorch, but the training scripts should work fine.
I should point out that the results use an 80/10/10 split of the single main WMT17 news-commentary corpus, and are not comparable to the conference results, which use the full training and dev corpora.
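For anyone trying to reproduce that setup, an 80/10/10 split over line-aligned parallel files can be done along these lines (file names here are placeholders, not necessarily what the repo's scripts use):

```sh
# Sketch of an 80/10/10 train/valid/test split of a parallel corpus.
total=$(wc -l < news-commentary.en)
n_train=$((total * 80 / 100))
n_valid=$((total * 10 / 100))
for lang in en zh; do
  head -n "$n_train" "news-commentary.$lang" > "train.$lang"
  tail -n +"$((n_train + 1))" "news-commentary.$lang" | head -n "$n_valid" > "valid.$lang"
  tail -n +"$((n_train + n_valid + 1))" "news-commentary.$lang" > "test.$lang"
done
```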
Same here, also looking for a pretrained model.
Hi, I couldn't download the pretrained models through the link provided. Could someone share a valid link?
Best,