Results: 23 comments of Anatoly Vostryakov

From my point of view, -1 is the usual Python way to take the last element. I suggest supporting it.
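For reference, this is the standard Python/NumPy convention the suggestion refers to (a minimal sketch; nothing here is specific to cgt):

```python
import numpy as np

# Plain Python lists and NumPy arrays both treat index -1 as "the last element".
xs = [10, 20, 30]
assert xs[-1] == 30

a = np.arange(12).reshape(3, 4)
assert a[-1, -1] == 11  # last row, last column
```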

Moreover, I work on Mac OS X (64-bit). If I write the above as `input_var = cgt.matrix('input', dtype=np.int32)`, I get the following error on the line `output = w_glove[cgt.flatten(input_var)]`: ...
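For context, a minimal sketch of the setup being described. Only `cgt.matrix` and `cgt.flatten` appear in the original comment; the construction of `w_glove` via `cgt.shared` and its shape are assumptions, and the error text itself is truncated above.

```python
import numpy as np
import cgt

# Assumed: the embedding matrix is a shared parameter (shape is hypothetical).
w_glove = cgt.shared(np.random.randn(10000, 300).astype(np.float32))

# int32 index matrix, as in the comment.
input_var = cgt.matrix('input', dtype=np.int32)

# The line reported to raise the (truncated) error on Mac OS X 64-bit.
output = w_glove[cgt.flatten(input_var)]
```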

Oh, great! Many thanks again for the Tatoeba-Challenge project! You recently published the Spanish-to-English and other models that we need! By the way, about the pre-processing step for OPUS datasets...

> Is it a big and frequent problem?

No, it's rare. And if I remove "!" from the end of sentences, the problem disappears from the English output. I'll let you know...

@jorgtied We didn't find this problem with other models that we use from the Helsinki-NLP/Tatoeba-Challenge project: por-eng, fr-en, spa-eng, fi-en, da-en, nl-en, so-en, no-en, el-en (maybe nn-en/nb-en, I don't remember exactly...

OK, I found that you created two SentencePiece models, one for the source language and one for the target language. And the vocab file you created with the marian-vocab utility on a concatenation of the source and target texts...
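As I understand that step, it looks roughly like the sketch below (a hedged reconstruction: the file names are hypothetical, and it assumes `marian-vocab` is on PATH, reading text from stdin and writing the vocabulary to stdout):

```python
import subprocess

# Concatenate source and target training texts (hypothetical file names).
with open("concat.txt", "wb") as out:
    for path in ("train.src", "train.trg"):
        with open(path, "rb") as f:
            out.write(f.read())

# Build the joint vocabulary with the marian-vocab utility.
with open("concat.txt", "rb") as inp, open("vocab.yml", "wb") as out:
    subprocess.run(["marian-vocab"], stdin=inp, stdout=out, check=True)
```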

Yes, it's clear enough, especially for really different languages. I was just thinking about one use case with a glossary where a combined SentencePiece model can be helpful. Imagine a situation when...
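A minimal sketch of what training such a combined (joint source+target) SentencePiece model could look like, using the sentencepiece Python package; the file names and vocabulary size are assumptions:

```python
import sentencepiece as spm

# Train one model on both languages' text (hypothetical file names / vocab size).
spm.SentencePieceTrainer.train(
    input="train.src,train.trg",
    model_prefix="combined",
    vocab_size=32000,
)

# A single model can then tokenize text from either language, or a mix of both.
sp = spm.SentencePieceProcessor(model_file="combined.model")
print(sp.encode("The word 'hola' means hello.", out_type=str))
```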

Sometimes an input sentence naturally includes two languages (for example, an English sentence quoting a Spanish phrase).

Is Semantic Role Labeling such a difficult task to solve with appropriate quality? I'm just curious why it took so much time to release.

Good to hear that.