
Why is the count of polyphonic characters in CEDICT larger than that in the corpus?

Open JohnHerry opened this issue 4 years ago • 2 comments

Hi, I found that the number of polyphonic characters in the corpus is 623, while the number in CEDICT is over 700. What is the reason? I mean, at prediction time a polyphonic character in a sentence may not be in the set of 623 polys, but in the set of 700+. How will the model predict its pinyin then?

JohnHerry avatar Aug 21 '20 06:08 JohnHerry

Hi, as mentioned in the previous issue, our dataset does not cover all possible Chinese polyphonic characters. We collect Chinese sentences from Wikipedia and label them, so some polyphonic characters are missing from our data. The final output of our model is a probability distribution over all possible pinyins. But as you point out, the model never sees some polyphonic characters during training, so it is quite likely that the model fails to predict the correct pinyin in those cases. I believe such cases are really rare, though.
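To make the situation above concrete, here is a minimal sketch of one possible workaround: for a polyphonic character outside the trained set, fall back to its most frequent dictionary pronunciation instead of trusting the model. This is not g2pM's actual implementation; `trained_polys`, `cedict_pinyins`, and `predict_pinyin` are all hypothetical names with illustrative data.

```python
# Characters assumed to be covered by the training corpus (illustrative).
trained_polys = {"行", "长"}

# Pronunciations per character, ordered by assumed frequency (illustrative data).
cedict_pinyins = {
    "行": ["xing2", "hang2"],
    "长": ["chang2", "zhang3"],
    "传": ["chuan2", "zhuan4"],  # polyphonic, but not in the training set
}

def predict_pinyin(char, model_predict):
    """Use the trained model for characters seen in training;
    otherwise fall back to the most frequent dictionary reading."""
    if char in trained_polys:
        return model_predict(char)
    # Out-of-training polyphonic character: dictionary fallback.
    return cedict_pinyins[char][0]

# Stand-in for the neural model's prediction.
dummy_model = lambda char: cedict_pinyins[char][0]

print(predict_pinyin("传", dummy_model))  # falls back to the dictionary entry
```

A fallback like this guarantees some answer for every dictionary character, at the cost of ignoring context for the characters the model never saw.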

seanie12 avatar Aug 22 '20 01:08 seanie12

As we tested, g2pM is not good enough for use in production. Maybe more samples are needed for the CPP dataset.

JohnHerry avatar Aug 25 '20 06:08 JohnHerry