Concept-based XAI Library

CXAI is an open-source library implementing state-of-the-art concept-based and disentanglement-learning methods for research on Explainable AI (XAI).

CXAI supports a variety of models, datasets, and evaluation metrics associated with concept-based approaches:
High-level Specs:
Methods:
- Now You See Me (CME): Concept-based Model Extraction.
- Concept Bottleneck Models (see the sketch after this list)
- Weakly-Supervised Disentanglement Without Compromises
- Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
- Concept Whitening for Interpretable Image Recognition
- On Completeness-aware Concept-Based Explanations in Deep Neural Networks
- Towards Robust Interpretability with Self-Explaining Neural Networks
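As an illustration of the concept-bottleneck idea listed above, here is a minimal, hypothetical sketch in TensorFlow/Keras. The function name, layer sizes, and training setup are assumptions for illustration only and are not this library's API: an input-to-concept network predicts annotated concepts, and a label predictor that sees only the concept vector produces the task output, with both parts trained jointly.

    # Hypothetical concept-bottleneck sketch (illustrative only, not the
    # concepts_xai API): a concept predictor followed by a label predictor.
    import tensorflow as tf

    def build_concept_bottleneck(input_shape, n_concepts, n_classes):
        inputs = tf.keras.Input(shape=input_shape)
        features = tf.keras.layers.Dense(128, activation="relu")(
            tf.keras.layers.Flatten()(inputs)
        )
        # Bottleneck: each unit is supervised to predict one annotated concept.
        concepts = tf.keras.layers.Dense(
            n_concepts, activation="sigmoid", name="concepts"
        )(features)
        # The label predictor sees only the concept vector.
        labels = tf.keras.layers.Dense(
            n_classes, activation="softmax", name="labels"
        )(concepts)
        model = tf.keras.Model(inputs=inputs, outputs=[concepts, labels])
        model.compile(
            optimizer="adam",
            loss={
                "concepts": "binary_crossentropy",
                "labels": "sparse_categorical_crossentropy",
            },
        )
        return model

Training such a model on batches of (input, (concepts, label)) optimises the concept and label losses jointly, which is one of the training regimes discussed in the Concept Bottleneck Models paper.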
Datasets:
To download the datasets, run the script datasets/download_datasets.sh.
Requirements
- Python 3.7 or 3.8
- See requirements.txt for the remaining required packages.
Installation
To install from source, run the following command:

    python setup.py install

This will install the concepts-xai package together with all of its dependencies.

To check that the package has been installed successfully, you may run:

    import concepts_xai
    help("concepts_xai")

to display all the subpackages included in this installation.
Subpackages

- datasets: datasets to use, including task functions.
- evaluation: different evaluation metrics to use for evaluating our methods.
- experiments: experimental setups (to be added soon).
- methods: defines the concept-based methods. Note: SSCC defines wrappers around these methods that turn them into semi-supervised concept-labelling methods.
- utils: contains utility functions for model creation as well as data management.
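To see what each of these subpackages ships after installation, you can enumerate them with Python's standard pkgutil module. This snippet is an illustrative helper, not part of the library; it only assumes the package layout described above.

    # Enumerate the subpackages installed under concepts_xai (illustrative).
    import pkgutil

    import concepts_xai

    for module_info in pkgutil.iter_modules(concepts_xai.__path__):
        print(module_info.name)  # e.g. datasets, evaluation, methods, utils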
Citing
If you find this code useful in your research, please consider citing:
    @article{kazhdan2021disentanglement,
      title={Is Disentanglement all you need? Comparing Concept-based \& Disentanglement Approaches},
      author={Kazhdan, Dmitry and Dimanov, Botty and Terre, Helena Andres and Jamnik, Mateja and Li{\`o}, Pietro and Weller, Adrian},
      journal={arXiv preprint arXiv:2104.06917},
      year={2021}
    }
This work was presented at the RAI, WeaSuL, and RobustML workshops at the Ninth International Conference on Learning Representations (ICLR 2021).