neural-audio-fp
Neural Audio Fingerprint for High-specific Audio Retrieval based on Contrastive Learning
About
- This is the official code and dataset release by the authors (since July 2021) for reproducing the neural audio fingerprinter.
- Previously, there was a PyTorch implementation by Yi-Feng Chen.
- :eight_spoked_asterisk: Sound DEMO available now.
Requirements
Minimum:
- NVIDIA GPU with CUDA 10+
- 25 GB of free SSD space for mini dataset experiments
More info
System requirements to reproduce the ICASSP result
- CPU with 8+ threads
- NVIDIA GPU with 11+ GB of VRAM
- 500+ GB of free SSD space for the full-scale experiment
- Extracting the tar archive temporarily requires an additional 440 GB of free space.
Recommended batch-size for GPU
| Device | Recommended BSZ |
|---|---|
| 1080 Ti, 2080 Ti (11 GB), Titan X, Titan V (12 GB), AWS/GCP V100 (16 GB) | 320 |
| Quadro RTX 6000 (24 GB), 3090 (24 GB) | 640 |
| V100v2 (32 GB), AWS/GCP A100 (40 GB) | 1280 |
| ~~TPU~~ | ~~5120~~ |
- The larger the BSZ, the higher the performance.
- To allow a BSZ larger than what actual GPU memory permits, one trick is to remove `allow_gpu_memory_growth()` from `run.py` (see the sketch below).
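For reference, a helper named like this typically wraps TensorFlow's on-demand GPU memory allocation. Below is a hedged sketch of what such a function usually does, not necessarily the repo's exact code:

```python
import tensorflow as tf

def allow_gpu_memory_growth():
    # Allocate GPU memory on demand instead of reserving it all up front.
    # Removing this call restores TensorFlow's default pre-allocation,
    # which the note above says can permit a larger BSZ.
    for gpu in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)
```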
Install
Docker
# CUDA 10.1-based image
docker pull mimbres/neural-audio-fp:latest
# CUDA 11.2-based image for RTX 30x0 and later
docker pull mimbres/neural-audio-fp:cuda11.2.0-cudnn8
Create a custom image from Dockerfile
Requirements
- NVIDIA driver >= 450.80.02
- Docker > 20.0
Create
You can create an image through Dockerfile and environment.yml.
git clone https://github.com/mimbres/neural-audio-fp.git
cd neural-audio-fp
docker build -t neural-audio-fp .
Further information
- Intel CPU users can remove `libopenblas` from the Dockerfile; `Faiss` and `Numpy` are already optimized for Intel MKL.
- The image size is about 12 GB, or 6.43 GB compressed.
- To optimize GPU-based search speed, install Faiss from source.
Conda
Create a virtual environment via .yml
Requirements
- NVIDIA driver >= 450.80.02, CUDA >= 11.0 and cuDNN 8 (compatibility)
- NVIDIA driver >= 440.33, CUDA == 10.2 and cuDNN 7 (compatibility)
- Anaconda3 or Miniconda3 with Python >= 3.6
Create
After checking the requirements,
git clone https://github.com/mimbres/neural-audio-fp.git
cd neural-audio-fp
conda env create -f environment.yml
conda activate fp
Create a virtual environment without .yml
# Python 3.8: installing in the same virtual environment
conda create -n YOUR_ENV_NAME
conda install -c anaconda -c pytorch tensorflow=2.4.1=gpu_py38h8a7d6ce_0 cudatoolkit faiss-gpu=1.6.5
conda install pyyaml click matplotlib
conda install -c conda-forge librosa
pip install kapre wavio
If your installation fails at this point and you don't want to build from source... :thinking:
- Try installing `tensorflow` and `faiss-gpu=1.6.5` (not 1.7.1) in separate environments.
# After creating a tensorflow environment for training...
conda create -n YOUR_ENV_NAME
conda install -c pytorch faiss-gpu=1.6.5
conda install pyyaml click
Now you can run search & evaluation by
python eval/eval_faiss.py --help
Dataset
| | Dataset-mini v1.1 (11.2 GB) | Dataset-full v1.1 (443 GB) |
|---|---|---|
| tar | :eight_spoked_asterisk: kaggle / gdrive | dataport (open-access) |
| raw | gdrive | gdrive |
- The only difference between these two datasets is the size of 'test-dummy-db'. So you can first train and test with `Dataset-mini`; `Dataset-full` is for testing at 100x larger scale.
- You can download the `Dataset-mini` via the `kaggle` CLI (recommended).
  - Sign in to Kaggle -> Account -> API -> Create New Token -> download `kaggle.json`
pip install --user kaggle
cp kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
kaggle datasets download -d mimbres/neural-audio-fingerprint
100%|███████████████████████████████████| 9.84G/9.84G [02:28<00:00, 88.6MB/s]
Dataset installation
This dataset includes all the music sources, background noises, and impulse response (IR) samples needed to reproduce the ICASSP results.
Directory location
The default directory of the dataset is ../neural-audio-fp-dataset. You can
change the directory location by modifying config/default.yaml.
.
├── neural-audio-fp-dataset
└── neural-audio-fp
Structure of dataset
neural-audio-fp-dataset/
├── aug
│ ├── bg <=== Audioset, Pub/cafe etc. for background noise mix
│ ├── ir <=== IR data for microphone and room reverb simulation
│ └── speech <=== subset of common-voice, NOT USED IN THE PAPER RESULT
├── extras
│ └── fma_info <=== Metadata for the music sources.
└── music
├── test-dummy-db-100k-full <== 100K full-length songs
├── test-query-db-500-30s <== 500 songs (30s) and 2K synthesized queries
├── train-10k-30s <== 10K songs (30s) for training
└── val-query-db-500-30s <== 500 songs (30s) for validation/mini-search
The data format is 16-bit, 8000 Hz, PCM mono WAV. README.md and LICENSE are
included in the dataset for more details.
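As a quick sanity check of this format, you can inspect any file with `wavio` (installed for this project); the path below is a placeholder:

```python
import wavio

# Point this at any WAV file from the dataset (placeholder path).
w = wavio.read('some_track.wav')
print(w.rate, w.sampwidth, w.data.shape)
# Expected: 8000 (Hz), 2 (bytes, i.e. 16-bit), (n_samples, 1) for mono
```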
Checksum for Dataset-full
Install checksumdir.
pip install checksumdir
Compare the checksums.
checksumdir -a md5 neural-audio-fp-dataset
# aa90a8fbd3e6f938cac220d8aefdb134
checksumdir -a sha1 neural-audio-fp-dataset
# 5bbeec7f5873d8e5619d6b0de87c90e180363863d
Quickstart
There are three basic COMMANDs, one for each step.
# Train
python run.py train CHECKPOINT_NAME
# Generate fingerprint
python run.py generate CHECKPOINT_NAME
# Search & Evalutaion (after generating fingerprint)
python run.py evaluate CHECKPOINT_NAME CHECKPOINT_INDEX
Help for the run.py client and its commands:
python run.py --help
python run.py COMMAND --help
More Features
Click to expand each topic.
Managing Checkpoint
python run.py train CHECKPOINT_NAME CHECKPOINT_INDEX
- If `CHECKPOINT_INDEX` is not specified, the training will resume from the latest checkpoint.
- In the `default` configuration, all checkpoints are stored in `logs/checkpoint/CHECKPOINT_NAME/ckpt-CHECKPOINT_INDEX.index` (see the sketch below).
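The `ckpt-*` files are standard TensorFlow checkpoint artifacts, so the latest index can be located with TensorFlow's own utility. A minimal sketch (the trainer handles this internally):

```python
import tensorflow as tf

# Resolve the most recent checkpoint prefix under the default log directory.
latest = tf.train.latest_checkpoint('logs/checkpoint/CHECKPOINT_NAME')
print(latest)  # e.g. 'logs/checkpoint/CHECKPOINT_NAME/ckpt-42'
```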
Training
python run.py train CHECKPOINT --max_epoch=100 -c default
Notes:
- Check the batch size that fits on your device first.
- The `default` config sets `TR_BATCH_SZ=120` with `OPTIMIZER=Adam`.
- For `TR_BATCH_SZ` >= 240, `OPTIMIZER=LAMB` is recommended.
- For `TR_BATCH_SZ` >= 1280, `LR=1e-4` can be too small.
- In the NT-Xent loss function, the best temperature parameter `TAU` is in the range of [0.05, 0.1] (a minimal sketch of this loss follows the list).
- The augmentation strategy is quite important; this topic deserves further discussion.
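For intuition about `TAU`, here is a minimal NT-Xent sketch over (anchor, replica) embedding pairs. It only illustrates where the temperature enters the loss; it is not the repo's exact implementation:

```python
import tensorflow as tf

def ntxent_loss(z_a, z_b, tau=0.05):
    # z_a, z_b: (N, d) L2-normalized embeddings of N positive pairs.
    n = tf.shape(z_a)[0]
    z = tf.concat([z_a, z_b], axis=0)              # (2N, d)
    sim = tf.matmul(z, z, transpose_b=True) / tau  # cosine sims / temperature
    sim = sim - 1e9 * tf.eye(2 * n)                # exclude self-similarity
    # The positive for row i is its paired view at index (i + N) mod 2N.
    labels = tf.concat([tf.range(n) + n, tf.range(n)], axis=0)
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
                                                          logits=sim)
    return tf.reduce_mean(loss)
```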
Config File
The config file is located in config/CONFIG_NAME.yaml.
You can edit directory location, data selection, hyperparameters for
model and optimizer, batch-size, strategies for time-domain and
spectral-domain augmentation chain, etc. After training, it is important
to keep the config file in order to restore the model.
python run.py COMMAND -c CONFIG
When using the generate command, it is important to use the same config that was used in training.
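Since the config is plain YAML, you can inspect it with `pyyaml` (installed above) before launching a run; a small sketch:

```python
import yaml

# Load and inspect a config. Keep this file after training, since
# `generate` must reuse the exact config that training used.
with open('config/default.yaml') as f:
    cfg = yaml.safe_load(f)
print(cfg.keys())  # e.g. the DATASEL section and training hyperparameters
```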
Fingerprint Generation
python run.py generate CHECKPOINT_NAME # from the latest checkpoint
python run.py generate CHECKPOINT_NAME CHECKPOINT_INDEX -c CONFIG_NAME
# Location of the generated fingerprint
.
└──logs
└── emb
└── CHECKPOINT_NAME
└── CHECKPOINT_INDEX
├── db.mm
├── db_shape.npy
├── dummy_db.mm
├── dummy_db_shape.npy
├── query.mm
└── query_shape.npy
With the default config, generate will generate embeddings (or fingerprints)
from 'dummy_db', 'test_query' and 'test_db'. The generated embeddings will
be located in logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/**.mm and
**.npy (a loading sketch follows the commands below).
- `dummy_db` is generated from the 100K full-length dataset.
- In the `DATASEL` section of the config, you can select options for the pair of `db` and `query` generation. The default is `unseen_icassp`, which uses a pre-defined test set.
- It is possible to generate only the `db` and `query` pair with the `--skip_dummy` option. This is a frequently used option to avoid overwriting the most time-consuming `dummy_db` fingerprints in every experiment.
- It is also possible to generate embeddings (or fingerprints) from your custom source.
python run.py generate --source SOURCE_ROOT_DIR --output FP_OUTPUT_DIR --skip_dummy # for custom audio source
python run.py generate --help # more details...
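The `.mm` files are raw, headerless memory-mapped arrays; their shapes are stored in the companion `*_shape.npy` files. A minimal loading sketch, assuming float32 storage (the dtype here is an assumption):

```python
import numpy as np

emb_dir = 'logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/'

# Read the shape first, then map the raw embedding matrix read-only.
db_shape = tuple(np.load(emb_dir + 'db_shape.npy'))
db = np.memmap(emb_dir + 'db.mm', dtype='float32', mode='r', shape=db_shape)
print(db.shape)  # (n_fingerprints, embedding_dim)
```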
Search & Evaluation
The following command will construct a faiss.index from the generated
embeddings or fingerprints located at
logs/emb/CHECKPOINT_NAME/CHECKPOINT_INDEX/.
# faiss-gpu
python run.py evaluate CHECKPOINT_NAME CHECKPOINT_INDEX [OPTIONS]
# faiss-cpu
python run.py evaluate CHECKPOINT_NAME CHECKPOINT_INDEX --nogpu
In addition, you can choose one of the `--index_type` options (default is IVFPQ)
from the table below:
| Type of index | Description |
|---|---|
| `l2` | L2 distance |
| `ivf` | Inverted File Index (IVF) |
| `ivfpq` | Product Quantization (PQ) with IVF :book: |
| `ivfpq-rr` | IVF-PQ with re-ranking |
| ~~`ivfpq-rr-ondisk`~~ | ~~IVF-PQ with re-ranking on disk search~~ |
| `hnsw` | Hierarchical Navigable Small World :book: |
python run.py evaluate CHECKPOINT_NAME CHECKPOINT_INDEX --index_type IVFPQ
Currently, only a few Faiss options are available in the run.py client.
Instead, you can directly run:
python eval/eval_faiss.py EMB_DIR --index_type IVFPQ --kprobe 20 --nogpu
python eval/eval_faiss.py --help
Note that eval_faiss.py does not require Tensorflow.
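For reference, the `ivfpq` type corresponds to Faiss's `IndexIVFPQ`. Below is a self-contained sketch with illustrative parameters (not the repo's defaults); mapping `--kprobe` to Faiss's `nprobe` is an assumption from the naming:

```python
import faiss
import numpy as np

d = 128                       # fingerprint dimension (assumption)
nlist, m, nbits = 256, 16, 8  # illustrative IVF-PQ parameters

quantizer = faiss.IndexFlatL2(d)  # coarse quantizer for the inverted file
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

db = np.random.rand(10000, d).astype('float32')  # stand-in for fingerprints
index.train(db)  # learn coarse centroids and PQ codebooks
index.add(db)

index.nprobe = 20  # number of inverted lists visited per query
dists, ids = index.search(db[:5], k=10)
print(ids.shape)  # (5, 10): top-10 neighbor IDs per query segment
```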
Tensorboard
Tensorboard is enabled by default in the ['TRAIN'] section of the config file.
# Run Tensorboard
tensorboard --logdir=logs/fit --port=8900 --host=0.0.0.0
Build DB & Search
Here is an overview of the system for building and retrieving the database. The system and the 'matcher' algorithm are not detailed in the paper, but they are quite simple, as in this code.
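As an illustration of one simple way such a matcher can work (a hedged sketch of the offset-voting idea, not the repo's exact code): each query segment retrieves its top-k candidate DB segment IDs, each candidate votes for a DB start offset after compensating for the segment's position within the query, and the offset with the most votes wins.

```python
import numpy as np

def sequence_match(per_seg_ids, hop=1):
    # per_seg_ids: (L, k) top-k DB segment IDs for each of L query segments.
    votes = {}
    for t, candidates in enumerate(per_seg_ids):
        for cand in candidates:
            start = int(cand) - t * hop  # implied start offset in the DB
            votes[start] = votes.get(start, 0) + 1
    best = max(votes, key=votes.get)
    return best, votes[best]  # matched DB offset and its vote count

# 3 query segments, top-2 candidates each, all consistent with offset 100:
print(sequence_match(np.array([[100, 7], [101, 55], [102, 9]])))  # (100, 3)
```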
Plan
- Now working on a `tf.data`-based new data pipeline for multi-GPU and TPU support.
- One-page Colab demo.
- This project is currently based on Faiss, which provides the fastest large-scale vector search.
- Milvus is also worth watching, as it is an active project aimed at industrial-scale vector search.
Augmentation Demo and Scoreboard
The augmentation demo was generated by `dataset2wav.py`.
External links
- (Unofficial) PyTorch implementation by Yi-Feng Chen.
Acknowledgement
This project has been supported by the TPU Research Cloud (TRC) program.
Cite
@conference{chang2021neural,
  author = {Chang, Sungkyun and Lee, Donmoon and Park, Jeongsoo and Lim, Hyungui and Lee, Kyogu and Ko, Karam and Han, Yoonchang},
  title = {Neural Audio Fingerprint for High-specific Audio Retrieval based on Contrastive Learning},
  booktitle = {International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)},
  year = {2021}
}