zhyang

9 comments by zhyang

I manually built a dictionary containing several word pairs for the translation test. The coverage is 100% but the accuracy is 0. Why is the accuracy 0?

The language pair is English to Chinese, and the corpus contains 200w (2,000,000) sentences. The dictionary only contains five word pairs. I run with the command `python3 eval_translation.py train.en.txt.remBlank.tok.bpe.lf.50.mono.vectors.normalized.mapped train.zh.seg.txt.remBlank.bpe.lf.50.mono.vectors.normalized.mapped -d test_dic`

The test_dict is:

```
word 词语
I 我
you 他
hello 你好
hi 你好
thanks 谢谢
word 词
I 我们
```

And the mapped embeddings were obtained according to the example in...
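To make sure I understand the metrics, this is roughly how I expect coverage and accuracy to be computed (a minimal sketch with made-up names, not the actual eval_translation.py code; the real script also supports multiple gold translations per source word):

```python
import numpy as np

def evaluate(src_emb, trg_emb, src_word2ind, trg_words, dictionary):
    """Coverage/accuracy for nearest-neighbor word translation.

    src_emb, trg_emb: (vocab, dim) arrays of length-normalized vectors.
    src_word2ind: {source word -> row in src_emb}.
    trg_words: list mapping row in trg_emb -> target word.
    dictionary: list of (source word, gold target word) pairs.
    """
    covered = correct = 0
    for src_word, gold in dictionary:
        if src_word not in src_word2ind:
            continue  # OOV pairs lower coverage; they are skipped for accuracy
        covered += 1
        src_vec = src_emb[src_word2ind[src_word]]
        sims = trg_emb @ src_vec  # cosine similarity, since vectors are normalized
        prediction = trg_words[int(np.argmax(sims))]
        if prediction == gold:
            correct += 1
    coverage = covered / len(dictionary)
    accuracy = correct / covered if covered else 0.0
    return coverage, accuracy
```

Note that with only five covered pairs, every miss costs 20% accuracy, so an exact 0 is quite possible if the mapping is noisy.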

@artetxem No, the embeddings are trained on 200w (2,000,000) sentences. I have expanded the dictionary to 25 words, and the accuracy is still 0. Maybe my test dictionary is still too small?

@artetxem Yes, I am using the numeral-based initialization, and the vocabulary size for our model is 30000. I will test it with a bigger test dictionary. Thank you.

It seems that you made a mistake when restoring the parameters of the discriminator. Have you pre-trained a discriminator?

Maybe you need to set reload=False, which ensures that you re-train the discriminator rather than reloading the pre-trained one.
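Something like this is what I mean (a minimal sketch; the flag, checkpoint path, and model are placeholders, so adapt them to the actual code):

```python
import torch
import torch.nn as nn

# Hypothetical setting: with reload=False the discriminator is trained
# from scratch instead of being restored from a pre-trained checkpoint.
reload = False

# Placeholder architecture; the real discriminator comes from the training code.
discriminator = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))

if reload:
    # Restoring is only valid if the checkpoint matches this architecture.
    discriminator.load_state_dict(torch.load("discriminator.ckpt"))
```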

Getting the .pkl file is easy. You just need to dump the vocabs used by the generator to a .pkl file.
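For example (a minimal sketch; I am assuming the vocabulary is a plain token-to-index dict, so adapt the names to your generator):

```python
import pickle

# Placeholder for the vocabulary actually used by the generator,
# e.g. a {token: index} mapping.
vocab = {"<pad>": 0, "<unk>": 1, "hello": 2, "world": 3}

# Dump the vocab to a .pkl file.
with open("vocab.pkl", "wb") as f:
    pickle.dump(vocab, f)

# It can later be restored the same way:
with open("vocab.pkl", "rb") as f:
    vocab = pickle.load(f)
```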

> Got stuck when compiling the fused_kernels when training on multiple nodes. But it works well in a single node. Why?

@SefaZeng Same problem here. Have you fixed it?