
Increase transparency by applying explainability analysis to ML models.

pomodoren opened this issue on Jun 27, 2023 · 1 comment

Is your feature request related to a problem? Please describe. It would be valuable to add a layer of understanding of how the models make their decisions, and to follow the classification decision through the inner layers of the deep learning models (such as U-Net or ResNet). This can increase transparency and understanding of how the models behave. Such analysis is more common in medical use cases, but it can probably be transferred to fAIr models. Without it, a gap can open between contributors and model creators, leaving the models as "magic" black boxes.

Describe the solution you'd like Together with the models, provide a downloadable report on the segmentation. Practically, you would see that a model performs well on buildings with a specific property (say, a round shape) and less well on other buildings. This can be done with Shapley values and similar methods.
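
A minimal sketch of what such a report could start from, assuming a Keras/TensorFlow segmentation model with a (batch, H, W, 1) mask output and the shap package; the file names and the mean-probability reduction are illustrative placeholders, not fAIr's actual API:

```python
import numpy as np
import shap
import tensorflow as tf

# Hypothetical inputs: a trained fAIr model and a few image chips as numpy arrays.
model = tf.keras.models.load_model("checkpoint.h5")
background = np.load("background_chips.npy")   # ~50 representative training chips
samples = np.load("chips_to_explain.npy")      # chips where the model struggles

# SHAP explains scalar outputs, so reduce the predicted mask to one score per chip
# (here: mean building probability). Other reductions are equally possible.
score = tf.keras.layers.Lambda(
    lambda m: tf.reduce_mean(m, axis=[1, 2, 3])
)(model.output)
score_model = tf.keras.Model(inputs=model.inputs, outputs=score)

# GradientExplainer works on differentiable models such as U-Net / ResNet backbones.
explainer = shap.GradientExplainer(score_model, background)
shap_values = explainer.shap_values(samples)

# Per-pixel attributions: positive values push the score towards "building".
shap.image_plot(shap_values, samples)
```

The resulting attribution maps could then be inspected per building property (shape, size, roof colour) to check whether the model relies on the expected evidence.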

Describe alternatives you've considered

  • NA

Additional context

  • this issue is based on a discussion at the FOSS4G 2023 conference: https://2023.foss4g.org/
  • check the shap package for insight (image examples): https://shap.readthedocs.io/en/latest/index.html

Tasks

  • [ ] Take example model, example data
  • [ ] Test shap, find misclassification reasoning
  • [ ] Create sample report (see the sketch after this list)
  • [ ] Update report on model
