
interpretability of models trained on xtal2png

Open kjappelbaum opened this issue 2 years ago • 3 comments

One interesting application of this representation is that it might be explainable in a useful form.

If we train a model on the image, we can then use one of the established interpretability techniques to obtain a mask that highlights the 'important' parts of the image. If we decode this mask, we recover the relevant structural fragments (and could also mine them, and potentially use them to assemble new structures).

The advantage of doing this on the image representation rather than with a GNN is that the image should have fewer issues with longer-range interactions.
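
A minimal sketch of the encode/decode round trip this idea relies on, assuming `XtalConverter` can be imported from the top-level `xtal2png` package with its `xtal2png`/`png2xtal` methods; the toy NaCl structure and the saliency mask are placeholders, since mapping highlighted pixels back to atoms depends on how the encoding lays out the cell, coordinates, and distance matrix:

```python
# Sketch only: assumes xtal2png's XtalConverter (xtal2png / png2xtal) and a
# pymatgen Structure; the saliency mask is a placeholder, not a real API call.
import numpy as np
from pymatgen.core import Lattice, Structure
from xtal2png import XtalConverter

# Toy rock-salt structure as a stand-in for real training data
structure = Structure(
    Lattice.cubic(5.64),
    ["Na", "Cl"],
    [[0, 0, 0], [0.5, 0.5, 0.5]],
)

xc = XtalConverter()
imgs = xc.xtal2png([structure], save=False)  # 64x64 grayscale PIL image(s)
arr = np.asarray(imgs[0], dtype=float) / 255.0

# An interpretability method (e.g. Grad-CAM) would supply a mask of the same
# shape; high values pick out the blocks of the encoding (coordinates,
# distance-matrix entries, cell parameters) the model actually relied on.
saliency = np.ones_like(arr)  # placeholder mask
important_fraction = (saliency > 0.5).mean()

decoded = xc.png2xtal(imgs)  # round trip back to pymatgen Structures
print(decoded[0])
print(f"fraction of pixels flagged important: {important_fraction:.2f}")
```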

kjappelbaum avatar Jul 27 '22 09:07 kjappelbaum

Following up from our chat, maybe the following two could be combined without too much hassle (rough sketch after the list):

  • https://github.com/jacobgil/pytorch-grad-cam
  • https://github.com/sparks-baird/xtal2png/blob/main/notebooks/2.1-xtal2png-cnn-classification.ipynb
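
A rough sketch of how the two might fit together, using pytorch-grad-cam's documented `GradCAM`/`show_cam_on_image` calls; the CNN below is a stand-in for the classifier in the linked notebook, and the target layer, class index, and 64x64 single-channel input size are assumptions:

```python
# Rough sketch: apply pytorch-grad-cam to a CNN trained on xtal2png images.
# TinyCNN is a placeholder for the notebook's classifier; target_layers, the
# class index, and the 64x64 input size are assumptions, not the real model.
import numpy as np
import torch
import torch.nn as nn
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

class TinyCNN(nn.Module):
    """Placeholder CNN over single-channel 64x64 xtal2png images."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN().eval()  # in practice: load the trained notebook model

# img: one xtal2png image as a (64, 64) uint8 array (e.g. np.asarray(pil_img))
img = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # placeholder input
input_tensor = torch.from_numpy(img[None, None].astype(np.float32) / 255.0)

# Grad-CAM over the last conv layer; the mask highlights the regions of the
# crystal encoding that drive the predicted class.
cam = GradCAM(model=model, target_layers=[model.features[-3]])
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(1)])[0]  # (64, 64) in [0, 1]

rgb = np.repeat((img / 255.0)[..., None], 3, axis=-1).astype(np.float32)
overlay = show_cam_on_image(rgb, grayscale_cam, use_rgb=True)
# `grayscale_cam` could then be thresholded and mapped back through the
# xtal2png layout to see which structural entries were "important".
```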

sgbaird avatar Aug 05 '22 02:08 sgbaird

Hey, what is the progress on this project? I am interested in working on this.

HarshaSatyavardhan avatar Oct 09 '23 07:10 HarshaSatyavardhan

I don't think either @kjappelbaum or I have immediate plans to explore the interpretability piece. Feel free to give it a try and let us know how it goes! Happy to provide feedback or suggestions.

sgbaird avatar Nov 10 '23 17:11 sgbaird