
Research and Production Oriented Speaker Verification, Recognition and Diarization Toolkit

WeSpeaker


Roadmap | Awesome Papers | Runtime (x86_gpu) | Pretrained Models

WeSpeaker mainly focuses on speaker embedding learning, with application to the speaker verification task. We support both online feature extraction and loading pre-extracted features in Kaldi format.

Installation

  • Clone this repo
git clone https://github.com/wenet-e2e/wespeaker.git
  • Create a conda environment (PyTorch >= 1.10.0 is required):
conda create -n wespeaker python=3.9
conda activate wespeaker
conda install pytorch=1.10.1 torchaudio=0.10.1 cudatoolkit=11.3 -c pytorch -c conda-forge
pip install -r requirements.txt
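After installation, it is worth confirming that the installed PyTorch build actually meets the `>= 1.10.0` requirement. A minimal sketch of such a check; `meets_requirement` is an illustrative helper (not part of WeSpeaker) that you could run against `torch.__version__`:

```python
# Minimal sketch: check that a dotted version string (e.g. torch.__version__)
# satisfies the "pytorch >= 1.10.0" requirement. Illustrative helper only.
def meets_requirement(version, minimum=(1, 10, 0)):
    """True if `version` (e.g. '1.10.1+cu113') is at least `minimum`."""
    numeric = version.split("+")[0]  # drop local build tags such as +cu113
    parts = tuple(int("".join(c for c in tok if c.isdigit()) or 0)
                  for tok in numeric.split("."))
    # Pad short strings so '1.10' compares like (1, 10, 0)
    parts += (0,) * (len(minimum) - len(parts))
    return parts >= minimum

print(meets_requirement("1.10.1"))  # True: satisfies >= 1.10.0
print(meets_requirement("1.9.1"))   # False: too old
```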

Recipes

  • VoxCeleb: Speaker Verification recipe on the VoxCeleb dataset
    • 🔥 UPDATE 2022.7.19: We apply the same setups as the CNCeleb recipe, and obtain state-of-the-art performance among open-source systems
    • 🔥 EER/minDCF on the vox1-O-clean test set are 0.723%/0.069 (ResNet34) and 0.728%/0.099 (ECAPA_TDNN_GLOB_c1024), after large-margin fine-tuning and AS-Norm
  • CNCeleb: Speaker Verification recipe on the CNCeleb dataset
    • 🔥 UPDATE 2022.7.12: We are migrating the winning system of CNSRC 2022 (report | slides)
    • 🔥 EER/minDCF reduced from 8.426%/0.487 to 6.492%/0.354 after large-margin fine-tuning and AS-Norm
  • VoxConverse: 🔥 UPDATE 2022.7.2: Diarization recipe on the VoxConverse dataset
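The recipes above report EER, the equal error rate: the operating point where the false-accept rate equals the false-reject rate. A rough sketch of how that metric can be computed from trial scores; `compute_eer` is an illustrative helper, not WeSpeaker's actual scoring code:

```python
# Rough sketch of EER: sweep thresholds over the observed scores and return
# the rate at the point where false-accept and false-reject rates are closest.
# Illustrative helper only, not WeSpeaker's scoring implementation.
def compute_eer(target_scores, nontarget_scores):
    """Return the equal error rate for same-speaker / different-speaker scores."""
    best_far, best_frr, best_gap = 1.0, 1.0, float("inf")
    for thr in sorted(set(target_scores) | set(nontarget_scores)):
        far = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        frr = sum(s < thr for s in target_scores) / len(target_scores)
        if abs(far - frr) < best_gap:
            best_far, best_frr, best_gap = far, frr, abs(far - frr)
    return (best_far + best_frr) / 2

# Perfectly separated scores give an EER of 0; overlapping scores do not.
print(compute_eer([0.9, 0.8], [0.2, 0.1]))  # 0.0
```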

Support List:

Discussion

For Chinese users, you can scan the QR code on the left to follow the official account of the WeNet Community. We have also created a WeChat group for better discussion and quicker responses; please scan the QR code on the right to join the chat group.

Looking for contributors

If you are interested in contributing, feel free to contact @wsstriving or @robin1001.