Edresson Casanova

Results 41 comments of Edresson Casanova

Hi @jaketae, You can easily select the device using `CUDA_VISIBLE_DEVICES=1`. With this, only GPU 1 will be visible to PyTorch, and it will be forced to use this...
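A minimal sketch of the idea above: setting `CUDA_VISIBLE_DEVICES` before PyTorch is imported hides all other GPUs from the process. The variable name and behavior are standard CUDA; the `torch` line is left commented as an illustration only.

```python
import os

# Restrict the process to physical GPU 1. This must happen *before*
# `import torch`, because PyTorch reads the variable at CUDA init time.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch
# torch.cuda.device_count()  # would now report 1; the GPU is addressed as cuda:0

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Equivalently, the variable can be set on the command line, e.g. `CUDA_VISIBLE_DEVICES=1 python train.py`.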

> @stalevna Hi! Did you manage to replicate the results of the first experiment? Recently, we created a recipe that replicates the first experiment proposed in the YourTTS paper. The...

> Thanks a lot! @Edresson I really appreciate the time you've taken out to answer this query. This will help me a lot, I just had one small query, that...

@TejaswiniiB @etmyhome @mandray @leminhyen2 @e0xextazy @karynaur Recently, we created a recipe that makes everything easier. If you like, you can try to fine-tune the model with this recipe. The recipe...

The training procedures for voice conversion and TTS are the same. If you like, you can follow the recipe that replicates the first experiment proposed in the YourTTS paper. The recipe...

> @WeberJulian Hi! > > You mentioned that there will be a recipe to train a model on other languages using massively multilingual version of the model trained by you....

@amitli1 Did you find any solution for this? I think the code is available here https://github.com/microsoft/UniSpeech/blob/e3043e2021d49429a406be09b9b8432febcdec73/downstreams/speaker_verification/models/ecapa_tdnn.py but I didn't find any checkpoint for it. A lot of papers are using...

Hi @talipturkmen, The conditioning-input reduction factor when using the Perceiver Resampler is 256 because we extract the mel spectrogram with a hop size of 256; check here: https://github.com/coqui-ai/TTS/blob/dbf1a08a0d4e47fdad6172e433eeb34bc6b13b4e/TTS/tts/layers/xtts/trainer/gpt_trainer.py#L147 Best...
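To illustrate the reduction factor mentioned above: with a hop size of 256, each mel-spectrogram frame covers 256 waveform samples, so the conditioning input is roughly 256 times shorter than the raw audio. The helper below is a hypothetical sketch (exact frame counts depend on padding/centering settings in the actual feature extractor).

```python
import math

def num_mel_frames(num_samples: int, hop_size: int = 256) -> int:
    """Approximate number of mel-spectrogram frames for a waveform,
    assuming one frame per hop (illustrative; real STFT framing
    depends on padding and centering options)."""
    return math.ceil(num_samples / hop_size)

# One second of 22,050 Hz audio -> about 87 frames:
print(num_mel_frames(22050))  # 87
```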

Hi @pranavpawar3, contributions are always welcome on all Coqui repositories. Feel free to contribute :). @erogol This feature is not implemented yet, and it would be welcome, right?

I got the same issue here. I was only able to build from source (clone the repo, then run `python setup.py install`). `pip install git+https://github.com/HazyResearch/flash-attention` also gives me the...