AudioDec
Are some activation functions missing between some layers?
Thanks for your work. I have trained the model on my own dataset and ran into the same question as issue #7. When I checked the model, I found some differences in the AutoEncoder:
- Before the encoder output is fed into the Projector, is an activation function needed?
- Before the ConvTranspose1d, is an activation function needed?
- Should a tanh activation be added to the Decoder's final output?
Other popular implementations all include these, so I added them (see the sketch after this list):
- add an activation function before https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/models/autoencoder/modules/projector.py#L50
- add an activation function before https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/models/autoencoder/modules/decoder.py#L62
- add an activation function before https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/models/autoencoder/modules/decoder.py#L120
- add a tanh() after https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/models/autoencoder/modules/decoder.py#L120
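For concreteness, here is a minimal PyTorch sketch of the three additions; the channel counts, kernel sizes, and the choice of ELU are placeholders for illustration, not AudioDec's actual configuration:

```python
import torch
import torch.nn as nn

class ProjectorSketch(nn.Module):
    def __init__(self, in_channels=64, code_dim=64):
        super().__init__()
        self.act = nn.ELU()  # 1) activation before the projection conv
        self.proj = nn.Conv1d(in_channels, code_dim, kernel_size=3, padding=1)

    def forward(self, x):
        return self.proj(self.act(x))

class DecoderTailSketch(nn.Module):
    def __init__(self, channels=64, upsample=2):
        super().__init__()
        self.act1 = nn.ELU()  # 2) activation before the ConvTranspose1d
        self.up = nn.ConvTranspose1d(channels, channels, kernel_size=2 * upsample,
                                     stride=upsample, padding=upsample // 2)
        self.act2 = nn.ELU()
        self.out_conv = nn.Conv1d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        x = self.up(self.act1(x))
        return torch.tanh(self.out_conv(self.act2(x)))  # 3) tanh on the final waveform

# quick shape check: (batch, channels, time) -> (batch, 1, time * upsample)
y = DecoderTailSketch()(torch.randn(2, 64, 100))
assert y.shape == (2, 1, 200)
```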
When I added them and retrained, I got some improvement on unseen datasets over your baseline, when I only trained the AutoEncoder with discriminators and did not fine-tune it with AudioDec.
BTW, I trained the model only on Librispeech and AIShell at a 16k sampling rate and tested it on another clean TTS dataset after 160k training steps. When training is finished (800k steps in total), I will compare the final results, upload some demos, and share my training config.
demo with 160k steps: demo.zip
Hi, thanks for the interesting experiments! I think it is reasonable to add more nonlinearity to the model to enhance its modeling ability, as long as training remains stable. If you have more detailed results in any form (demo page, paper, etc.), please feel free to share them with us, and I will update the README to note that adding activation functions improves robustness to unseen data.
@bigpon I confirmed that changing the autoencoder model can improve results. My changes are as follows:
- It is very important to add the activation functions as described above (highly recommended). Both the Encodec and SoundStream papers include them (I guess they all borrowed from MelGAN). I use the Snake activation function rather than ELU or LeakyReLU (see the sketch after this list).
- It is very important to add a WeightNorm layer, which significantly improves training stability and model results (highly recommended).
- Appropriately increasing code_dim and the model size can improve audio reconstruction quality (the mel loss drops to about 15.3 in my version); code_dim=128 is recommended, although I use 256.
- I use the noncausal training mode, MPD + ComplexMRD as discriminators, and MultiMelLoss, trained with AdamW and ExponentialLR.
- BTW, there are some errors in your MRD: the intermediate convolution outputs are dropped, so the feature loss cannot be computed (https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/models/vocoder/modules/discriminator.py#L568), and padding is missing for each Conv2d layer (https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/models/vocoder/modules/discriminator.py#L511).
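For reference, here is a minimal sketch of the Snake activation I use; the learnable per-channel alpha and the epsilon are choices made for this sketch, and other variants exist:

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Snake activation: x + (1/alpha) * sin^2(alpha * x), with a learnable per-channel alpha."""
    def __init__(self, channels: int, alpha_init: float = 1.0):
        super().__init__()
        # one alpha per channel, broadcast over (batch, channels, time)
        self.alpha = nn.Parameter(alpha_init * torch.ones(1, channels, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # small epsilon keeps the 1/alpha term finite if alpha drifts toward zero
        return x + (1.0 / (self.alpha + 1e-9)) * torch.sin(self.alpha * x) ** 2

# drop-in replacement wherever ELU/LeakyReLU would normally sit, e.g.:
block = nn.Sequential(nn.Conv1d(64, 64, kernel_size=3, padding=1), Snake(64))
```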
Here is my training config.yaml symAD_librispeech_16000_hop160_base.txt
Demos for the new config, trained for 200k steps on the librispeech and aishell datasets but tested on an unseen dataset:
Hi @BridgetteSong,
- Thanks for the great investigation effort! I will check the results on the 48 kHz VCTK corpus.
- Do you have any plan to write a paper about your findings? If you write one, please inform me, and I will add the info to the README for others' reference.
- You are correct. The MRD actually has these problems, and I will fix them.
- Where do you put the WeightNorm layers?
- Could you also provide the results of the original AudioDec for reference? (I assume the audiodec results in demo.zip are from the modified version, right?)
- According to your conclusions, do these modifications increase the quality for arbitrary datasets, or the robustness to unseen datasets? Since you train and test the model using libritts and aishell, I assume that these modifications will increase the reconstruction quality for seen data, right?
@bigpon
- I don't have plans to write a paper yet, but I'm interested in improving on Encodec.
- I add WeightNorm on each Conv1d layer, i.e. https://github.com/facebookresearch/AudioDec/blob/9b498385890b38de048f2db535c2fbf8cbeea80b/layers/conv_layer.py#L46 becomes `self.conv = torch.nn.utils.weight_norm(nn.Conv1d(...))`. I think the stability of the AutoEncoder is very important, whether you train two stages or only one stage as I do. BTW, I see that WeightNorm is added in the 2nd stage by default using the apply() function, but I can't confirm whether the WeightNorm initialization of the ResidualBlock succeeds through apply(), so I directly wrap the Conv1d as above (see the sketch after this list).
- I trained the model on the libritts and aishell datasets but tested on an unseen dataset (the audios in demo.zip are unseen; they come from another TTS dataset and even include a singing demo), so these modifications can increase quality for arbitrary datasets.
- I can't provide results for the original AudioDec because the model has been changed; demo.zip is the modified version. However, the hifi directory in demo.zip contains the 16k and 24k original audios, so if anyone has trained an original AudioDec model, they can test it with the real audios in demo.zip.
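To make the two ways of adding weight normalization concrete, here is a small sketch; the layer shapes and module layout are placeholders rather than the repo's exact classes:

```python
import torch.nn as nn
from torch.nn.utils import weight_norm

# Option 1: wrap each conv explicitly where it is constructed,
# so every conv inside the residual blocks is definitely covered.
conv = weight_norm(nn.Conv1d(64, 64, kernel_size=3, padding=1))

# Option 2: walk a finished model with apply(), as the 2nd stage does.
def _apply_weight_norm(m: nn.Module) -> None:
    if isinstance(m, (nn.Conv1d, nn.ConvTranspose1d)):
        weight_norm(m)

model = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=7, padding=3),
    nn.ELU(),
    nn.Conv1d(64, 64, kernel_size=3, padding=1),
)
model.apply(_apply_weight_norm)

# sanity check that the reparameterization reached every conv
for name, module in model.named_modules():
    if isinstance(module, nn.Conv1d):
        assert hasattr(module, "weight_g"), f"weight_norm missing on {name}"
```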
Hi, Thanks for your investigation!
According to our internal experiments, we have reached some conclusions.
- Adding more activation functions, as in HiFiGAN, slightly increases robustness to unseen data. However, the result is very similar to that of our 2-stage approach, which already uses HiFiGAN as the decoder.
- The snake activation doesn’t show marked improvements over the ELU activation. In some cases, the snake activation even achieves much worse speech quality. We think the instability of the snake activation might cause the problem.
- Instead of adding activations, we found that increasing the bitrate to a reasonable scale (e.g., 24 kbps, as in Opus) significantly improves robustness to unseen data, which makes sense since it reduces the modeling difficulty (a rough bitrate calculation follows this list). However, a very low bitrate is essential for some temporally sensitive tasks such as LLM-based speech generation. Therefore, without greatly changing the architecture, adopting more training data is a compromise. (We are investigating a new architecture for unseen-data robustness and hope to release it soon.)
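As a rough illustration of how the bitrate scales with the codec settings (all numbers below are assumptions chosen for the example, not AudioDec's released configuration):

```python
import math

sample_rate = 48_000   # Hz (assumed)
hop_size = 300         # samples per encoded frame (assumed)
n_codebooks = 8        # residual VQ stages (assumed)
codebook_size = 1024   # entries per codebook -> log2(1024) = 10 bits each

frame_rate = sample_rate / hop_size                        # 160 frames per second
bits_per_frame = n_codebooks * int(math.log2(codebook_size))
bitrate_kbps = frame_rate * bits_per_frame / 1000          # 160 * 80 / 1000 = 12.8 kbps
print(f"{bitrate_kbps:.1f} kbps")
# Pushing toward 24 kbps means more or larger codebooks (or a smaller hop),
# which is what reduces the modeling difficulty mentioned above.
```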
On the other hand, the 2D conv padding issue of the MSTFT discriminator has been fixed, and the corresponding models have been updated. Thanks for your contributions again.
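For readers who ran into the same MSTFT issues, here is an illustrative sketch (not the repository's exact architecture) of an STFT sub-discriminator that keeps explicit Conv2d padding and returns its intermediate feature maps so a feature-matching loss can be computed:

```python
import torch
import torch.nn as nn

class STFTSubDiscriminatorSketch(nn.Module):
    """One STFT-resolution sub-discriminator; each Conv2d carries explicit padding
    and every intermediate activation is kept for the feature-matching loss."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(1, channels, kernel_size=(3, 9), padding=(1, 4)),
            nn.Conv2d(channels, channels, kernel_size=(3, 9), stride=(1, 2), padding=(1, 4)),
            nn.Conv2d(channels, channels, kernel_size=(3, 9), stride=(1, 2), padding=(1, 4)),
            nn.Conv2d(channels, channels, kernel_size=(3, 3), padding=(1, 1)),
        ])
        self.post = nn.Conv2d(channels, 1, kernel_size=(3, 3), padding=(1, 1))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, spec: torch.Tensor):
        # spec: (batch, 1, freq_bins, frames), e.g. a magnitude STFT
        fmaps = []
        x = spec
        for conv in self.convs:
            x = self.act(conv(x))
            fmaps.append(x)  # keep intermediate outputs for the feature loss
        return self.post(x), fmaps

def feature_matching_loss(fmaps_real, fmaps_fake):
    # L1 distance between corresponding intermediate features
    return sum(torch.mean(torch.abs(r - f)) for r, f in zip(fmaps_real, fmaps_fake))
```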
> The snake activation doesn't show marked improvements over the ELU activation. In some cases, the snake activation even achieves much worse speech quality. We think the instability of the snake activation might cause the problem.
@bigpon Hi bigpon, I found that someone published a paper based on the SoundStream framework that presents the benefits of the snake activation function as one of its innovations, but this is inconsistent with your conclusion, so I'm not sure what went wrong.
According to DAC, they claimed that snake is much better. However, in the AudioDec architecture, we didn't observe that tendency. Two possible reasons:
- Snake is sensitive to initialization and training, so we might not have optimized the AudioDec training process for snake (e.g., we didn't apply layer normalization, etc.).
- Snake is better than LeakyReLU, but we use ELU here.
Since we gave it a quick try without carefully tuning the hyperparameters, further investigation is required.