
[IEEE-TIP R2]Advancing Pre-trained Teacher: Towards Robust Feature Discrepancy for Anomaly Detection

Overview

KD-based anomaly detection frameworks rest on two underlying assumptions. Assumption I: the teacher model can represent two separable distributions for normal and abnormal patterns. Assumption II: the student model can only reconstruct the normal distribution. In this paper, we propose a simple yet effective two-stage anomaly detection framework, termed AAND, which comprises an Anomaly Amplification stage (Stage I) to address Assumption I and a Normality Distillation stage (Stage II) to address Assumption II.
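As a minimal illustration of the feature-discrepancy idea behind KD-based detectors (a sketch with hypothetical names, not code from this repo), a per-location anomaly map can be read off as the cosine distance between teacher and student feature maps:

```python
import numpy as np

def anomaly_map(teacher_feat, student_feat):
    """Per-location anomaly score as 1 - cosine similarity between
    teacher and student feature maps of shape (C, H, W).
    Sketch only; names are hypothetical."""
    t = teacher_feat / (np.linalg.norm(teacher_feat, axis=0, keepdims=True) + 1e-8)
    s = student_feat / (np.linalg.norm(student_feat, axis=0, keepdims=True) + 1e-8)
    return 1.0 - (t * s).sum(axis=0)  # (H, W): large where features disagree

rng = np.random.default_rng(0)
t = rng.standard_normal((64, 16, 16)).astype(np.float32)
a = anomaly_map(t, t.copy())  # identical features -> near-zero map
```

Under Assumption II, the student matches the teacher only on normal regions, so the map peaks on anomalies.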

Authors

Canhui Tang, Sanping Zhou, Yizhe Li, Yonghao Dong, Le Wang

Xi'an Jiaotong University

News

🔥 2025.09: Awaiting SAE Decision approval

🔥 2025.05: Accept with Mandatory Minor Revisions

🔥 2024.06: Another KD-based project of ours, VAND-GNL, won 2nd place in the CVPR 2024 VAND 2.0 Challenge

🔧 Installation

Please use the following commands for installation.

# It is recommended to create a new environment
conda create -n AAND python==3.8
conda activate AAND

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

# Install packages and other dependencies
pip install -r requirements.txt

💾 Dataset

  • MVTec AD
  • VisA; use visa.py to generate meta.json.
  • MVTec-3D; we only use the RGB images, referred to as MVTec3D-RGB in our paper.
  • DRAEM_dtd, used as the auxiliary texture dataset for synthesizing anomalies as in DRAEM.
<your_path>
├── mvtec
    ├── bottle
        ├── train
        ├── test
        ├── ground_truth
    ├── ...

├── VisA
    ├── meta.json
    ├── candle
        ├── Data
        ├── image_anno.csv
    ├── ...

├── mvtec3d
    ├── bagel
        ├── train
            ├── good
                ├── rgb (we only use rgb)
                ├── xyz
        ├── test
        ├── ...

├── DRAEM_dtd
    ├── dtd
        ├── images
            ├── ...

Preprocessing

  • Extract foreground masks for the training images.
python scripts/fore_extractor.py --data_path <your_path>/<dataset_name>/ --aux_path <your_path>/dtd/images/  # the <dataset_name> is mvtec, VisA, or mvtec3d
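fore_extractor.py is the authoritative implementation; as a rough, hypothetical illustration of the idea (binarizing the object against a dark background), not the actual script:

```python
import numpy as np

def foreground_mask(gray, thresh=30):
    """Hypothetical sketch: mark pixels brighter than a dark background
    as foreground (1) and the rest as background (0)."""
    return (gray > thresh).astype(np.uint8)

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200  # bright 4x4 object on a dark background
mask = foreground_mask(img)
```

Restricting synthetic anomalies to the foreground keeps them on the object rather than the empty background.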

🚅 Training

You can train models on mvtec, VisA, or mvtec3d with the following command:

python train.py --data_root <your_path>/<dataset_name>/  # the <dataset_name> is mvtec, VisA, or mvtec3d

⛳ Testing

You can test the trained models on mvtec, VisA, or mvtec3d with the following command:

python test.py --data_root <your_path>/<dataset_name>/  # the <dataset_name> is mvtec, VisA, or mvtec3d
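Results on these benchmarks are commonly reported as image- and pixel-level AUROC. A minimal rank-based (Mann-Whitney) AUROC sketch, independent of this repo's evaluation code and ignoring tie handling:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    random anomalous sample scores higher than a random normal one.
    Sketch only; ties are not handled."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([0.1, 0.2, 0.8, 0.9])   # anomaly scores
labels = np.array([0, 0, 1, 1])           # 1 = anomalous
value = auroc(scores, labels)             # perfectly ranked -> 1.0
```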

Citation

@article{tang2024advancing,
  title={Advancing Pre-trained Teacher: Towards Robust Feature Discrepancy for Anomaly Detection},
  author={Tang, Canhui and Zhou, Sanping and Li, Yizhe and Dong, Yonghao and Wang, Le},
  journal={arXiv preprint arXiv:2405.02068},
  year={2024}
}

Acknowledgements