
Question about training loss and inference performance

Open zzw922cn opened this issue 4 years ago • 6 comments

Hi, thank you for your very nice work! I have rerun this project for 90K steps, and the loss_id_psnt is around 0.07. I fed in an in-domain speaker's mel-spectrogram with his speaker embedding as the source embedding, and another speaker's embedding as the target speaker embedding. Then I used the Griffin-Lim vocoder to generate the wav, but the voice still sounds like the source speaker. Is this normal? At what step, or at what value of loss_id_psnt, should voice conversion start to succeed? Thank you very much!


zzw922cn · Oct 23 '20

You probably need to fine-tune your bottleneck dimensions.

auspicious3000 · Oct 23 '20

Do you think I should enlarge the bottleneck dimension or decrease the bottleneck dimension?

zzw922cn · Oct 23 '20

There's detailed information in the paper on how to tune the bottleneck.

auspicious3000 · Oct 23 '20

OK, thank you~

zzw922cn · Oct 23 '20

> Do you think I should enlarge the bottleneck dimension or decrease the bottleneck dimension?

The paper says: "The first model, which we name the 'too narrow' model, reduces the dimensions of C1→ and C1← from 32 to 16, and increases the downsampling factor from 32 to 128 (note that a higher downsampling factor means a lower temporal dimension). The second model, which we name the 'too wide' model, increases the dimensions of C1→ and C1← to 256, decreases the downsampling factor to 8, and sets λ to 0."
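A rough way to compare these settings is to count how many bottleneck values survive per utterance. This is only a sketch: the names `dim_neck` (code dimension) and `freq` (downsampling factor) follow the hyperparameter naming used in the public AutoVC repo, and the frame count below is an arbitrary example, not a value from the thread.

```python
# Sketch: bottleneck capacity per utterance for the paper's probe models.
# Assumptions: dim_neck/freq follow the public AutoVC repo's naming;
# n_frames=128 is an arbitrary example utterance length.

def codes_per_utterance(dim_neck: int, freq: int, n_frames: int) -> int:
    """Total bottleneck values kept for one utterance: forward and
    backward codes (2 * dim_neck) sampled once every `freq` frames."""
    return 2 * dim_neck * (n_frames // freq)

configs = {
    "too_narrow": {"dim_neck": 16,  "freq": 128},  # paper's under-sized probe
    "default":    {"dim_neck": 32,  "freq": 32},   # released AutoVC setting
    "too_wide":   {"dim_neck": 256, "freq": 8},    # paper's over-sized probe
}

for name, c in configs.items():
    size = codes_per_utterance(c["dim_neck"], c["freq"], n_frames=128)
    print(name, size)
# too_narrow 32, default 256, too_wide 8192
```

The intuition from the paper: if the conversion still sounds like the source speaker, the bottleneck is likely too wide (speaker information leaks through), so one would shrink `dim_neck` and/or raise `freq`.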

But for a new dataset, how should we choose the hyperparameters? And should we use the DANN idea? Hope to discuss with you~

ruclion · Dec 23 '20

> Hi, thank you for your very nice work! I have rerun this project, and it has run 90K steps. the loss_id_psnt is around 0.07. […]

@zzw922cn Can you tell me which dataset you used and the batch size during training? Thanks in advance!

innovator1311 · Apr 27 '21