g2pm
Why is the count of polyphonic characters in cedict larger than that in the corpus?
Hi, I found that the count of polyphonic characters in the corpus is 623, while the count in cedict is over 700. What is the reason? I mean, at prediction time a polyphonic character in a sentence may not be in the set of 623 covered characters, but only in the set of 700+. How will the model predict its pinyin then?
Hi, as mentioned in the previous issue, our dataset does not cover all possible Chinese polyphonic characters. We collect Chinese sentences from Wikipedia and label them, so some polyphonic characters are missing from our data. The final output of our model is a probability distribution over all possible pinyins. But as you point out, the model never sees some polyphonic characters during training, so it is quite likely to fail to predict the correct pinyin in those cases. I believe such cases are really rare, though.
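One practical way to handle polyphonic characters the model never saw in training is to fall back to a dictionary's most frequent reading. The sketch below is purely illustrative and not part of g2pM's API; the names `TRAINED_POLYPHONES`, `CEDICT_DEFAULT`, and the toy model are assumptions made up for this example.

```python
# Illustrative sketch: fall back to a dictionary reading when the model
# did not see a polyphonic character during training.
# All names and data here are toy examples, not g2pM internals.

# Polyphonic characters covered by the training corpus (623 in the real CPP set).
TRAINED_POLYPHONES = {"行", "长"}

# Most frequent pinyin per character taken from cedict (700+ polyphones in reality).
CEDICT_DEFAULT = {"行": "xing2", "长": "chang2", "乐": "le4"}

def predict_pinyin(char, model_predict):
    """Use the model for covered polyphones; otherwise fall back to
    the dictionary's most frequent reading (or None if unknown)."""
    if char in TRAINED_POLYPHONES:
        return model_predict(char)    # model saw this character in training
    return CEDICT_DEFAULT.get(char)   # unseen polyphone: dictionary fallback

# Toy stand-in for the trained model's argmax prediction.
toy_model = {"行": "hang2", "长": "zhang3"}.get

print(predict_pinyin("行", toy_model))  # covered character: model output
print(predict_pinyin("乐", toy_model))  # uncovered character: cedict fallback
```

This keeps the model's contextual prediction where it is reliable and degrades gracefully to a context-free default elsewhere.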
In our tests, g2pM is not accurate enough for production use. The CPP dataset may need more samples.