Mikel Artetxe
Thanks for reporting the issue. I would say that the problem is not related to Android Studio but to your NDK setup. I work on Linux, so unfortunately I cannot help...
The format of the language pair packages is almost the same as that used by Apertium, which is documented at http://wiki.apertium.org/wiki/Language_pair_packages. Some differences: 1. Mitzuli uses a separate package for...
Thanks for your suggestion. I also think that this feature could be quite useful, but there are two main reasons why I haven't implemented it so far: - It is...
Thank you very much for your work! I would be very happy to add your localization to the app, but please note that this would require some commitment from your...
It looks like the error is in greedy decoding, not beam search. Does this happen at training or test time?
The size of our training corpora is as follows: - Spanish: 386 million tokens - French: 749 million tokens - German: 1,606 million tokens - English: 2,238 million tokens What...
That might be too little (in fact, 300k sentences would be very little to train an NMT system even if they were parallel). In any case, the problem might not...
Have a look at #4, it could be that your vocabulary is too large.
It looks like the problem is on the PyTorch side. Have a look at pytorch/pytorch#5667.
This obviously looks like an encoding problem, but I would need more details to know where it happens. Please report the full stack trace.