g2p-seq2seq
Confidence values for predictions?
At test time, some words may be entirely new and radically different from the words used to train the model (think foreign or medical terms). There may also be mislabelled samples in the training set. Is there a way to identify such cases?
It may be slightly off-topic here, but I was also wondering whether you could report confidence scores in the beam search results; that would be useful there too.
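One way to get such confidences, assuming the beam decoder exposes the total log-probability of each hypothesis (a hypothetical interface -- the actual g2p-seq2seq decoder may return scores differently), is to length-normalize the log-probabilities and take a softmax over the beam. A low top-hypothesis confidence could then flag unusual or possibly mislabelled words. A minimal sketch:

```python
import math

def beam_confidences(hypotheses):
    """Turn beam-search hypotheses into relative confidence scores.

    `hypotheses` is a list of (phoneme_sequence, total_log_prob) pairs,
    in the format a beam decoder might return (hypothetical -- adapt to
    whatever the real decoder outputs).
    """
    # Length-normalize so longer pronunciations are not unfairly penalized.
    norm = [(seq, lp / max(len(seq), 1)) for seq, lp in hypotheses]
    # Softmax over the normalized scores gives each hypothesis a
    # confidence relative to the rest of the beam.
    m = max(score for _, score in norm)
    exps = [(seq, math.exp(score - m)) for seq, score in norm]
    z = sum(e for _, e in exps)
    return [(seq, e / z) for seq, e in exps]

if __name__ == "__main__":
    # Example beam for the word "hello" (made-up log-probs).
    beam = [
        (["HH", "AH", "L", "OW"], -1.2),
        (["HH", "EH", "L", "OW"], -2.5),
        (["Y", "AH", "L", "OW"], -6.0),
    ]
    for seq, conf in beam_confidences(beam):
        print(" ".join(seq), round(conf, 3))
```

Note that these are confidences relative to the other beam hypotheses only; for detecting out-of-distribution words, the raw (length-normalized) log-probability of the top hypothesis is probably the more informative signal.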