
How to use dvector_create.py

Open zeyuanchen23 opened this issue 5 years ago • 7 comments

Hi!

Could you please explain how to run dvector_create.py on the TIMIT dataset?

This program tries to load .wav files (line 91). However, the original TIMIT data are .WAV files, and after preprocessing they are converted to .npy files. So where can I find the .wav files it expects?

Thanks!

zeyuanchen23 avatar May 13 '19 02:05 zeyuanchen23

Actually, TIMIT's .WAV files are NIST SPHERE (.sph) files, so you need to write code to convert them to standard RIFF .wav files. But the main problem is that the audio preprocessing used for training the embedding differs from the preprocessing used when creating d-vectors (dvector_create.py) for speaker diarization. Also, the TIMIT dataset is meaningless for the speaker diarization problem on its own, since every file contains only one speaker.
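
For reference, a minimal conversion sketch, not from this repo: it assumes an uncompressed-SPHERE TIMIT copy (libsndfile, which backs the soundfile package, reads NIST SPHERE headers directly; shorten-compressed copies need the sph2pipe tool instead) and a hypothetical TIMIT root directory.

'''
import pathlib

import soundfile as sf

timit_root = pathlib.Path('TIMIT')  # hypothetical dataset location

for sph_path in timit_root.rglob('*.WAV'):
    data, sr = sf.read(sph_path)  # libsndfile decodes the SPHERE header
    # Note: .with_suffix('.wav') collides with the source file on
    # case-insensitive filesystems; pick another output name there.
    sf.write(sph_path.with_suffix('.wav'), data, sr)
'''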

wrongbattery avatar May 13 '19 10:05 wrongbattery

Thanks for your reply, @wrongbattery. I saw that the preprocessing for training and for d-vector creation are different. About TIMIT: the website says it contains broadband recordings of 630 speakers of eight major dialects of American English, each reading ten phonetically rich sentences. Does this mean a model trained on TIMIT can be used for the diarization task on other datasets (e.g., AMI)?

zeyuanchen23 avatar May 16 '19 18:05 zeyuanchen23

The diarization task needs a good embedding to perform well; according to the UIS-RNN authors, it takes at least 5k speakers to get good embeddings. I already trained on TIMIT and ran the diarization task on the AMI dataset, and the results are very poor compared to the pyannote repo. It seems both UIS-RNN and this repo are incomplete versions, so you need to implement a lot of functionality to train on a new dataset.

wrongbattery avatar May 17 '19 08:05 wrongbattery

@wrongbattery d-vectors of shape [N, 256], with N the number of sliding windows, should be the input to uis-rnn (train_sequence), but the problem is that the cluster IDs paired with the d-vectors have to be extracted from the labels of the dataset.
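
For illustration, a minimal sketch of the shapes uis-rnn expects, following the demo.py convention of the google/uis-rnn repo; the n_windows value and the '0_spk0' label format are placeholder assumptions.

'''
import numpy as np

observation_dim = 256  # d-vector size produced by this repo's SpeechEmbedder
n_windows = 40         # N sliding windows in one utterance (example value)

# One training utterance: a 2-D float array of shape [N, 256] ...
train_sequence = np.random.rand(n_windows, observation_dim)
# ... paired with one string cluster ID per window, taken from dataset labels.
train_cluster_id = np.array(['0_spk0'] * n_windows)

# model.fit(train_sequence, train_cluster_id, args)  # uis-rnn training call
'''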

nidhal1231 avatar May 20 '19 15:05 nidhal1231

I have a workaround for the .wav creation for TIMIT, which I have put up in my forked repo: https://github.com/pravn/PyTorch_Speaker_Verification/blob/master/VAD_segments.py

In VAD_segments.py, wave.open can complain that TIMIT's RIFF headers aren't right, so we rewrite the file to 'tmp.wav' and work from there.

'''
import wave

import librosa
import soundfile as sf

# TIMIT's .WAV files carry NIST SPHERE headers, so wave.open() raises
# wave.Error on them; decode with librosa and rewrite a valid RIFF file.
try:
    wave.open(path, 'rb')
    file = path
except wave.Error:
    tmp, _ = librosa.load(path, sr=sr)
    sf.write('tmp.wav', tmp, sr)
    file = 'tmp.wav'
'''

pravn avatar Aug 20 '19 22:08 pravn

Has anyone figured this out? @pravn's change allowed me to generate the .npy files for the TIMIT dataset. But if I put them into the corresponding training and testing .npz files and run them with the demo.py file in the uis-rnn repo, training works fine, but testing fails, saying the test data is in the wrong format.

It looks like dvector_create.py outputs a 1-D array of floats for each test data point, but uis-rnn expects a 2-D array. Am I missing something?
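
For anyone hitting the same error, a hedged workaround sketch: it coerces each saved test sequence into the 2-D [N, 256] float array that uis-rnn's predict() expects. The 'timit_test.npz' file name and the 'test_sequence' key are assumptions following the demo.py convention.

'''
import numpy as np

test_data = np.load('timit_test.npz', allow_pickle=True)
test_sequences = [
    # reshape(-1, 256) promotes a flat (256,)-float row to shape [1, 256]
    np.asarray(seq, dtype=float).reshape(-1, 256)
    for seq in test_data['test_sequence']
]
# predicted_cluster_ids = model.predict(test_sequences[0], args)
'''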

chrisspen avatar Oct 16 '19 16:10 chrisspen

@wrongbattery
Have you fully trained the speaker diarization model using UIS-RNN? If yes, please enlighten us, as there is a lot of confusion going on.

Can you describe which pieces are missing, the dataset requirements, and anything else that would be helpful for others...

Cheers!

008karan avatar Feb 01 '20 08:02 008karan