Code for DeCoAR (ICASSP 2020) and BERTphone (Odyssey 2020)
Speech Representations
Models and code for deep learning representations developed by the AWS AI Speech team:
- DeCoAR (self-supervised contextual representations for speech recognition)
- BERTphone (phonetically-aware acoustic BERT for speaker and language recognition)
- DeCoAR 2.0 (deep contextualized acoustic representation with vector quantization)
Installation
We provide a library and CLI to featurize speech utterances. We hope to release training/fine-tuning code in the future.
Kaldi should be installed to kaldi/, or $KALDI_ROOT should be set.
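The lookup order described above can be sketched as a small helper (illustrative only; `kaldi_root` is a hypothetical name, not part of the package):

```python
import os
from pathlib import Path

def kaldi_root(env=None, default="kaldi/"):
    """Resolve the Kaldi installation: $KALDI_ROOT if set, else ./kaldi/.

    `env` defaults to os.environ; pass a dict to test the logic.
    """
    env = os.environ if env is None else env
    return Path(env.get("KALDI_ROOT", default))

# Example: warn early if neither location exists.
root = kaldi_root()
if not root.exists():
    print(f"Kaldi not found at {root}; install it there or set $KALDI_ROOT")
```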
We expect Python 3.6+. Our BERTphone models are defined in MXNet; our DeCoAR models are defined in PyTorch. Clone this repository, then:
pip install -e .
pip install mxnet-mkl~=1.6.0 # ...or mxnet-cu102mkl for GPU w/ CUDA 10.2, etc.
pip install gluonnlp # optional; for featurizing with bertphone
pip install torch fairseq # for featurizing with decoar and decoar2
Pre-trained models
First, download the model weights:
mkdir artifacts
# For DeCoAR trained on LibriSpeech (257M)
wget -P artifacts/ https://speech-representation.s3.us-west-2.amazonaws.com/checkpoint_decoar.pt
# For BERTphone_8kHz (λ=0.2) trained on Fisher
wget -qO- https://apache-mxnet.s3-us-west-2.amazonaws.com/gluon/models/bertphone_fisher_02-87159543.zip | zcat > artifacts/bertphone_fisher_02-87159543.params
# For DeCoAR 2.0:
wget -P artifacts/ https://speech-representation.s3.us-west-2.amazonaws.com/checkpoint_decoar2.pt
We support featurizing individual files with the CLI:
speech-reps featurize --model {decoar,bertphone,decoar2} --in-wav <input_file>.wav --out-npy <output_file>.npy
# --params <file>: load custom weights (otherwise use `artifacts/`)
# --gpu <int>: use GPU (otherwise use CPU)
or in code:
from speech_reps.featurize import DeCoARFeaturizer
# Load the model on GPU 0
featurizer = DeCoARFeaturizer('artifacts/checkpoint_decoar.pt', gpu=0)
# Returns a (time, feature) NumPy array
data = featurizer.file_to_feats('my_wav_file.wav')
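Either way, the result is a (time, feature) NumPy array. A minimal sketch of consuming it downstream (the file name, feature dimensions, and pooling choice are illustrative, not prescribed by the package):

```python
import numpy as np

# Stand-in for the CLI output: `featurize` writes a (time, feature) array.
# Here we fake 120 frames of 2048-dim DeCoAR-style features.
np.save("my_wav_file.npy", np.random.randn(120, 2048).astype(np.float32))

feats = np.load("my_wav_file.npy")   # shape: (time, feature)

# Mean-pool over time for a fixed-size utterance embedding, e.g. as
# input to a lightweight speaker or language classifier.
embedding = feats.mean(axis=0)       # shape: (feature,)
assert embedding.shape == (2048,)
```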
We plan to support Kaldi .scp and .ark files soon. For now, batches can be processed with the underlying featurizer._model.
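Until batch support lands, one common pattern is to featurize utterances individually and zero-pad the resulting arrays to a common length before handing them to a model. A pure-NumPy sketch (shapes illustrative; `pad_batch` is a hypothetical helper, not part of this package):

```python
import numpy as np

def pad_batch(feats_list):
    """Zero-pad (time, feature) arrays to the max length in the batch.

    Returns a (batch, max_time, feature) array plus the original
    lengths, so a downstream model can mask out the padding.
    """
    lengths = np.array([f.shape[0] for f in feats_list])
    feat_dim = feats_list[0].shape[1]
    batch = np.zeros((len(feats_list), lengths.max(), feat_dim), dtype=np.float32)
    for i, f in enumerate(feats_list):
        batch[i, : f.shape[0]] = f
    return batch, lengths

# Three utterances of different lengths with 2048-dim features.
utts = [np.ones((t, 2048), dtype=np.float32) for t in (50, 80, 64)]
batch, lengths = pad_batch(utts)
print(batch.shape)   # (3, 80, 2048)
```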
References
If you found our package or pre-trained models useful, please cite the relevant work:
@inproceedings{decoar,
  author    = {Shaoshi Ling and Yuzong Liu and Julian Salazar and Katrin Kirchhoff},
  title     = {Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition},
  booktitle = {{ICASSP}},
  pages     = {6429--6433},
  publisher = {{IEEE}},
  year      = {2020}
}
@inproceedings{bertphone,
  author    = {Shaoshi Ling and Julian Salazar and Yuzong Liu and Katrin Kirchhoff},
  title     = {BERTphone: Phonetically-aware Encoder Representations for Speaker and Language Recognition},
  booktitle = {{Speaker Odyssey}},
  publisher = {{ISCA}},
  year      = {2020}
}
@misc{ling2020decoar,
  title         = {DeCoAR 2.0: Deep Contextualized Acoustic Representations with Vector Quantization},
  author        = {Shaoshi Ling and Yuzong Liu},
  year          = {2020},
  eprint        = {2012.06659},
  archivePrefix = {arXiv},
  primaryClass  = {eess.AS}
}