FastSpeech2
Unused character embeddings?
I noticed that since each utterance is converted to a phoneme sequence, the character embeddings are never used. A quick visualization with 2D PCA shows that the embeddings corresponding to A-Z and a-z seem random, while the phoneme embeddings have a meaningful structure. Is that intentional?
Hi, sorry to bother you. I'm a student majoring in Linguistics and NLP. Recently I ran into a challenge extracting feature vectors for phonemes and characters. I would like to know if there is a pre-trained model that yields a feature vector for each phoneme and character. It seems that the "PCA of TTS Phonemes/Characters" you provided is close to my requirements! Do you have any suggestions for this?
@wabmhnsbn the visualizations above are 2D projections (with PCA) of the trainable 256D embeddings defined in the model's encoder. I just accessed them with model.encoder.src_word_emb.weight. Note that the model has to be trained; otherwise the embeddings will be random.
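For anyone who wants to reproduce the plot, here is a minimal sketch of the extraction-plus-PCA step. It uses a random matrix as a stand-in for the trained embedding table; in the real repo you would replace it with `model.encoder.src_word_emb.weight.detach().cpu().numpy()` (attribute path taken from the comment above), and the PCA is done directly with an SVD so no extra dependency is needed.

```python
import numpy as np

# Stand-in for the trained 256D embedding table. With a real checkpoint:
#   emb = model.encoder.src_word_emb.weight.detach().cpu().numpy()
# (vocab_size and the 256 dim here are illustrative assumptions.)
rng = np.random.default_rng(0)
vocab_size, emb_dim = 300, 256
emb = rng.normal(size=(vocab_size, emb_dim))

# 2D PCA via SVD: center the rows, then project onto the top two
# principal directions (rows of Vt).
centered = emb - emb.mean(axis=0, keepdims=True)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # shape (vocab_size, 2), one point per symbol

print(coords.shape)
```

Each row of `coords` is one symbol's 2D position; plotting them (e.g. with matplotlib, labelled by phoneme/character) reproduces the kind of visualization discussed above. As noted, an untrained table will just look like a Gaussian blob.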
Okay, I'll give it a try. Thank you!
Great question! I have the same one.