audioLIME: Listenable Explanations Using Source Separation

This repository contains the Python package audioLIME, a tool for creating listenable explanations for machine learning models in music information retrieval (MIR). audioLIME is based on LIME (local interpretable model-agnostic explanations), presented in this paper, and uses source separation estimates to create interpretable components. Alternative types of interpretable components are available (see the last section) and more will be added in the future.
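The core LIME idea can be sketched in a few lines: switch interpretable components on and off, query the black-box model on each perturbed input, and fit a linear surrogate whose weights become the explanation. This is a minimal illustration of that idea only, not the audioLIME implementation; the toy `black_box` and the component names are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of the LIME idea on interpretable components (illustration
# only, not the audioLIME API). In audioLIME a "component" could be one
# source separation estimate (vocals, drums, ...).

rng = np.random.default_rng(0)

n_components = 4                                   # e.g. vocals, drums, bass, other
true_importance = np.array([2.0, 0.0, 1.0, 0.0])   # toy "black box" weights

def black_box(mask):
    # Toy model: the prediction depends linearly on which components are active.
    return mask @ true_importance

# 1) Sample binary masks that switch components on/off.
masks = rng.integers(0, 2, size=(200, n_components)).astype(float)
# 2) Query the model on each perturbed input.
preds = np.array([black_box(m) for m in masks])
# 3) Fit a linear surrogate (least squares); its weights are the explanation.
weights, *_ = np.linalg.lstsq(masks, preds, rcond=None)

print(np.round(weights, 2))  # components 0 and 2 get the largest weights
```

Because the toy model is itself linear, the surrogate recovers it exactly; with a real classifier the weights are only a local approximation.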
Citing
If you use audioLIME in your work, please cite it:
@misc{haunschmid2020audiolime,
title={{audioLIME: Listenable Explanations Using Source Separation}},
author={Verena Haunschmid and Ethan Manilow and Gerhard Widmer},
year={2020},
eprint={2008.00582},
archivePrefix={arXiv},
primaryClass={cs.SD},
howpublished={13th International Workshop on Machine Learning and Music}
}
Publications
audioLIME is introduced/used in the following publications:
- Verena Haunschmid, Ethan Manilow and Gerhard Widmer, audioLIME: Listenable Explanations Using Source Separation
  - paper: arxiv
  - code: branches mml2020 and mml2020-experiments
- Verena Haunschmid, Ethan Manilow and Gerhard Widmer, Towards Musically Meaningful Explanations Using Source Separation
  - paper: arxiv
- Alessandro B. Melchiorre, Verena Haunschmid, Markus Schedl and Gerhard Widmer, LEMONS: Listenable Explanations for Music recOmmeNder Systems
- Shreyan Chowdhury, Verena Praher and Gerhard Widmer, Tracing Back Music Emotion Predictions to Sound Sources and Intuitive Perceptual Qualities
- Verena Praher(*), Katharina Prinz(*), Arthur Flexer and Gerhard Widmer, On the Veracity of Local, Model-agnostic Explanations in Audio Classification ((*) equal contribution)
  - preprint
  - code: audioLIME v0.0.3 and veracity
Installation
The audioLIME package is not on PyPI yet. To install it, clone the git repo and install it using
setup.py:
git clone https://github.com/CPJKU/audioLIME.git # HTTPS
git clone [email protected]:CPJKU/audioLIME.git # SSH
cd audioLIME
python setup.py install
To install a version for development purposes check out this article.
Tests
To test your installation, the following tests are available (more to come :)):
python -m unittest tests.test_SpleeterFactorization
Note on Requirements
To keep the package lightweight, not all optional dependencies are listed in setup.py.
Depending on the factorization you want to use, you might need additional packages,
e.g. spleeter.
Installation & Usage of spleeter
pip install spleeter==2.0.2
When you use spleeter for the first time, it will download the required model into a directory
pretrained_models. You can change this location only by setting the environment variable
MODEL_PATH before spleeter is imported. There are different ways to
set an environment variable,
for example:
export MODEL_PATH=/path/to/spleeter/pretrained_models/
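Alternatively, the variable can be set from Python, as long as this happens before the spleeter import. A small sketch (the path shown is a placeholder, not a real default):

```python
import os

# Placeholder path -- adjust to where you want spleeter's pretrained models to live.
os.environ["MODEL_PATH"] = "/path/to/spleeter/pretrained_models/"

# Import spleeter only *after* setting MODEL_PATH; setting the variable
# afterwards has no effect on where the models are downloaded.
# from spleeter.separator import Separator
```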
Available Factorizations
Currently we have the following factorizations implemented:
- SpleeterFactorization based on the source separation system spleeter (code)
- TimeFrequencyTorchFactorization: time-frequency segmentation based on SoundLIME (the original implementation was not flexible enough for our experiments)
- ImageLikeFactorization: superpixel segmentation, as proposed for images in the original LIME paper.
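To illustrate what time-frequency segmentation means as a set of interpretable components, here is a small numpy sketch (a conceptual illustration only, not the TimeFrequencyTorchFactorization API; the grid sizes and helper name are made up for the example). The spectrogram is cut into a grid, and each cell is one component that can be switched off by zeroing it out.

```python
import numpy as np

# Toy spectrogram (freq x time) and a 2x4 grid of components.
spec = np.arange(64, dtype=float).reshape(8, 8)
n_f, n_t = 2, 4  # 2 frequency bands, 4 time segments -> 8 components

def mask_components(spec, active):
    """Keep only the grid cells whose flag in `active` is 1; zero the rest."""
    out = np.zeros_like(spec)
    fh, tw = spec.shape[0] // n_f, spec.shape[1] // n_t
    for i in range(n_f):
        for j in range(n_t):
            if active[i * n_t + j]:
                out[i*fh:(i+1)*fh, j*tw:(j+1)*tw] = spec[i*fh:(i+1)*fh, j*tw:(j+1)*tw]
    return out

# Keep only component 0 (first time segment of the lower frequency band):
active = np.zeros(n_f * n_t)
active[0] = 1
masked = mask_components(spec, active)
```

The masked spectrograms are then fed to the model as the perturbed samples LIME needs; the listenable variant reconstructs audio from them.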