
Have you overcome the overfitting problem?

Open vuthede opened this issue 4 years ago • 8 comments

Hi there, @vitrioil. Just wanted to ask: have you overcome the overfitting problem you reported in the README? Do you have any idea what caused the overfitting, and any ideas for overcoming it? How much data did you train on? Thanks!

vuthede avatar Oct 05 '20 01:10 vuthede

Hi,

I did not solve the issue. I tried with around 20k audio clips for 2-person speech separation only. I would now assume that more data than that is required. I did not experiment much with the model, simply because training one epoch always took 1-2 days, so a couple of epochs would take weeks. This will vary depending on your GPU and VRAM availability; I would say more than 16 GB would be helpful. So, there is a lot of opportunity to tweak the model. I also found this. It could be helpful.

vitrioil avatar Oct 05 '20 14:10 vitrioil

Hi, thanks for your quick reply. In the paper, it seems they do some preprocessing to remove noise from the input. Do you think it might help?

vuthede avatar Oct 05 '20 17:10 vuthede

You mean while preparing the dataset? Well, I've seen someone mention that here. However, adding additional noise, e.g. from AudioSet, might help regularise.
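For illustration, mixing background noise into a clean clip at a chosen SNR is one common way to do this kind of augmentation. This is a minimal NumPy sketch, not code from the repo; the `mix_noise` helper and its signature are hypothetical.

```python
import numpy as np

def mix_noise(speech, noise, snr_db):
    """Hypothetical helper: mix noise into a speech clip at a target SNR (dB)."""
    # Loop the noise if it is shorter than the speech, then trim to length.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[:len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # avoid division by zero
    # Scale the noise so speech_power / scaled_noise_power = 10^(snr_db / 10).
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

During dataset creation you would call this on each mixture with noise clips drawn from a corpus such as AudioSet, randomising `snr_db` over a range (say 5-20 dB) so the model sees varied conditions.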

vitrioil avatar Oct 06 '20 13:10 vitrioil

Yeah, thanks. Although you hit the overfitting problem, I'm curious: could your model distinguish the voices of the two people to some extent?

vuthede avatar Oct 07 '20 06:10 vuthede

Yes, in certain instances you could make out who the main speaker was in the separated output, but not always. Sometimes it was only noise, or a mix of both speakers; for the most part the output was noisy. All of this also applied to the training data, though to a lesser extent. As I said, a lot of time is required for a model/dataset this big.

vitrioil avatar Oct 07 '20 14:10 vitrioil

Probably related to https://github.com/vitrioil/Speech-Separation/issues/4

JuanFMontesinos avatar Feb 11 '21 17:02 JuanFMontesinos

> Hi,
>
> I did not solve the issue. I tried with around 20k audio clips for 2-person speech separation only. I would now assume that more data than that is required. I did not experiment much with the model, simply because training one epoch always took 1-2 days, so a couple of epochs would take weeks. This will vary depending on your GPU and VRAM availability; I would say more than 16 GB would be helpful. So, there is a lot of opportunity to tweak the model. I also found this. It could be helpful.

Hi @vitrioil, regarding the 20k audio clips: do you mean you downloaded 200 videos and extracted the 20k audio clips from them? (200 × 199 / 2 = 19,900, which is the number of pair combinations for creating the mixed clips from 200 videos.)

MordehayM avatar Jul 27 '21 22:07 MordehayM

Hi @MordehayM ,

I believe it was 20k unique clips. 200C2 is indeed roughly 20k; however, not all combinations are considered.

There is a parameter, REMOVE_RANDOM_CHANCE (in audio_mixer_generator.py), which prevents the number of combinations from blowing up; otherwise a lot of files would be created. By default its value is 0.9.

Hence, I was not taking all combinations of files.
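To make the numbers concrete, here is a minimal sketch of that pair-generation idea. It assumes REMOVE_RANDOM_CHANCE is the probability of *dropping* a candidate pair, so roughly 10% of all pairs survive at the default of 0.9; the `make_pairs` function is hypothetical and the actual logic lives in audio_mixer_generator.py.

```python
import itertools
import math
import random

REMOVE_RANDOM_CHANCE = 0.9  # default value mentioned above

def make_pairs(files, rng):
    """Hypothetical sketch: keep each 2-way combination with
    probability 1 - REMOVE_RANDOM_CHANCE."""
    return [pair for pair in itertools.combinations(files, 2)
            if rng.random() >= REMOVE_RANDOM_CHANCE]

files = [f"clip_{i}.wav" for i in range(200)]
total = math.comb(200, 2)               # 19,900 possible pairs from 200 clips
kept = make_pairs(files, random.Random(0))
# kept holds roughly 10% of the 19,900 candidate pairs
```

Under this assumption, 200 source clips yield on the order of 2k mixtures rather than the full ~20k, which matches the "not all combinations" point above.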

vitrioil avatar Jul 28 '21 16:07 vitrioil