
[Bug]: EfficientAd strange anomaly map results

Open CarlosNacher opened this issue 1 year ago • 4 comments

Describe the bug

I know there was a PR that supposedly fixed general performance problems with the EfficientAd model (#2015). However, training EfficientAd on MVTec bottle with the latest anomalib version, I have found several problems.

I list them separately below:

  • On the one hand, the model appears to work well, but it produces strange "circles" as anomalies on the borders of the images, even in good samples: image

  • On the other hand, it separates the distributions perfectly (the confusion matrix is perfect), but the scores seem to have the wrong scale, as if too sparse (which I think may be due to #2027). Maybe a sigmoid function would fix this? Evidence below: image

  • Also, and I think this is related to the previous point (about scales when normalizing): even though the segmentation map looks okay (despite the "border circles"), the anomaly map appears to always have a single constant value (#1647). image As suggested in #1647, I tried both normalization_method = min_max (results above) and normalization_method = none, which gives the results below: image It looks like the map is being normalized with negative values or something, because the bluest pixels are the segmented ones, when the expected behaviour is for the red ones to be the most anomalous. It is strange. As a note: the dataset normalization is taken from EfficientAd's default transforms (none, because the forward pass already applies ImageNet normalization).

  • I have tried all possible combinations of the pad_maps and padding hyperparameters. The defaults (pad_maps = True, padding = False) work best.
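To illustrate the scaling problem from the second point, here is a minimal sketch (all score values are made up for illustration, not taken from my runs) of how min-max normalization collapses the good-image scores when a single defect scores very high, while a sigmoid around a chosen threshold would preserve mid-range contrast:

```python
import numpy as np

# Hypothetical image-level anomaly scores: good images cluster near 0.1,
# one defective image scores far higher (the "sparse" distribution).
scores = np.array([0.08, 0.10, 0.12, 25.0])

# Min-max normalization: the single extreme score drags the range out,
# so all good-image scores collapse toward 0.
minmax = (scores - scores.min()) / (scores.max() - scores.min())

# A sigmoid centered on a threshold (0.5 here is an illustrative value)
# keeps contrast around the decision boundary regardless of outliers.
threshold = 0.5
sigmoid = 1.0 / (1.0 + np.exp(-(scores - threshold)))

print(minmax)   # good scores squeezed into a ~0.002-wide band
print(sigmoid)  # good scores stay spread around ~0.4
```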

Dataset

MVTec

Model

Other (please specify in the field below)

Steps to reproduce the behavior

Train EfficientAd on MVTec bottle with default configurations.

OS information

OS information:

  • OS: Windows 10 Pro
  • Python version: 3.10.14
  • Anomalib version: 1.2.0dev
  • PyTorch version: 2.2.0+cu121
  • CUDA/cuDNN version: 12.5/8.9.7.29
  • GPU models and configuration: 1x GeForce RTX 3090
  • Any other relevant information: No

Expected behavior

EfficientAd to work well on MVTec.

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

1.2.0dev

Configuration YAML

data:
  class_path: anomalib.data.MVTec
  init_args:
    root: data/external/MVTec
    category: bottle
    train_batch_size: 1
    eval_batch_size: 1
    num_workers: 8
    task: segmentation
    transform: null
    train_transform: null
    eval_transform: null
    test_split_mode: from_dir
    test_split_ratio: 0.2
    val_split_mode: same_as_test
    val_split_ratio: 0.5
    seed: null
model:
  class_path: anomalib.models.EfficientAd
  init_args:
    imagenet_dir: data/external/imagenette
    teacher_out_channels: 384
    model_size: S
    lr: 0.0001
    weight_decay: 1.0e-05
    padding: false
    pad_maps: true
normalization:
  normalization_method: none
metrics:
  image:
  - F1Score
  - AUROC
  pixel:
  - F1Score
  - AUROC
  threshold:
    class_path: anomalib.metrics.F1AdaptiveThreshold
    init_args:
      default_value: 0.5
logging:
  log_graph: false
seed_everything: 6120
task: segmentation
default_root_dir: results
ckpt_path: null
trainer:
  accelerator: auto
  strategy: auto
  devices: 1
  num_nodes: 1
  precision: 32
  logger:
  - class_path: anomalib.loggers.AnomalibWandbLogger
    init_args:
      project: mvtec-bottle
  - class_path: anomalib.loggers.AnomalibMLFlowLogger
    init_args:
      experiment_name: mvtec-bottle
  callbacks:
  - class_path: anomalib.callbacks.checkpoint.ModelCheckpoint
    init_args:
      dirpath: weights/lightning
      filename: best_model-{epoch}-{image_F1Score:.2f}
      monitor: image_F1Score
      save_last: true
      mode: max
      auto_insert_metric_name: true
  - class_path: lightning.pytorch.callbacks.EarlyStopping
    init_args:
      patience: 5
      monitor: image_F1Score
      mode: max
  fast_dev_run: false
  max_epochs: 1000
  min_epochs: null
  max_steps: 70000
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  overfit_batches: 0.0
  val_check_interval: 1.0
  check_val_every_n_epoch: 1
  num_sanity_val_steps: 0
  log_every_n_steps: 50
  enable_checkpointing: true
  enable_progress_bar: true
  enable_model_summary: true
  accumulate_grad_batches: 1
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  deterministic: false
  benchmark: false
  inference_mode: true
  use_distributed_sampler: true
  profiler: null
  detect_anomaly: false
  barebones: false
  plugins: null
  sync_batchnorm: false
  reload_dataloaders_every_n_epochs: 0
  default_root_dir: null

Logs

No response

Code of Conduct

  • [X] I agree to follow this project's Code of Conduct

CarlosNacher avatar Jun 14 '24 08:06 CarlosNacher

Hello, I will number the problems you have mentioned so it is easier to answer:

  1. False positive "circles": I will look into this. It seems like a corner case.

  2. Sparse distribution of scores: As I understand it, the score distribution for EfficientAD can be very sparse because the internal normalization of anomaly maps is done on good images only, so some defects may produce very high scores.

  3. Heat maps look strange: I tried running it with normalization_method: none and got this anomaly map, which looks as expected (trained for 5 epochs): example

With normalization_method: minmax, the maximum score for EfficientAD can be very high, as shown in the plot in question 2, so normalizing by such a high value can produce that strange anomaly map.
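A minimal toy sketch of this effect (the values are illustrative, not taken from an actual EfficientAD run): when the map is normalized by a very large maximum, the whole map is squeezed into a narrow band and renders as a single flat color.

```python
import numpy as np

# Toy 4x4 anomaly map: background around 0.1, a small defect region at 0.4.
amap = np.full((4, 4), 0.1)
amap[1:3, 1:3] = 0.4

# Normalizing with a very high max score (standing in for the extreme
# validation-set maximum that min-max normalization can pick up)
# compresses the map's dynamic range to almost nothing.
high_max = 50.0  # illustrative value
normalized = (amap - amap.min()) / (high_max - amap.min())

# The visible contrast between defect and background nearly vanishes,
# which is why the rendered heat map looks like one constant value.
print(normalized.max() - normalized.min())
```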

  4. pad_maps and padding hyperparameters: This is normal; please use the default values.

abc-125 avatar Jun 25 '24 16:06 abc-125

@CarlosNacher How did you fix this heat map?

rishabh-akridata avatar Sep 30 '24 13:09 rishabh-akridata

@abc-125 @CarlosNacher Which file is normalization_method in? How can I set normalization_method=none in TorchInferencer during inference?

watertianyi avatar Nov 26 '24 06:11 watertianyi

@abc-125 @CarlosNacher The problem still exists in v2.0.0b2: the anomaly map contains only one value and a strange color map. May I ask how I can fix it?

wuliwuliy avatar Feb 23 '25 06:02 wuliwuliy

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Jul 15 '25 05:07 github-actions[bot]

This issue was closed because it has been stalled for 14 days with no activity.

github-actions[bot] avatar Jul 29 '25 05:07 github-actions[bot]