xtal2png
interpretability of models trained on xtal2png
One interesting application of this representation is that it might be explainable in a useful form.
If we train a model on the image, we can use one of the established interpretability techniques to obtain a mask that highlights the 'important' parts of the image. If we decode this mask, we recover the relevant structural fragments (which could also be mined, and potentially used to assemble new structures).
The advantage of doing this on the image representation rather than with a GNN is that the image should have fewer issues with longer-range interactions.
Following up on our chat, maybe the following two could be combined without too much hassle:
- https://github.com/jacobgil/pytorch-grad-cam
- https://github.com/sparks-baird/xtal2png/blob/main/notebooks/2.1-xtal2png-cnn-classification.ipynb
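To make the combination concrete, here is a minimal sketch of the Grad-CAM idea applied to a small CNN on a 64×64 grayscale input (the shape of an xtal2png image). The tiny `nn.Sequential` model and the binary classification target are placeholders, not the notebook's actual model; the `pytorch-grad-cam` library linked above wraps the same hook-based computation behind its `GradCAM` class.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a CNN trained on 64x64 xtal2png images.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),  # last conv = CAM target layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),  # e.g. a binary property classifier
)
model.eval()

# Capture activations and gradients at the target conv layer via hooks,
# which is what Grad-CAM (and pytorch-grad-cam) does under the hood.
acts, grads = {}, {}
target_layer = model[2]
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.rand(1, 1, 64, 64)  # dummy xtal2png-style input
score = model(x)[0, 1]        # score for the class of interest
score.backward()

# Grad-CAM: weight each channel by its average gradient, sum, then ReLU.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * acts["a"]).sum(dim=1)).squeeze(0)
cam = cam / (cam.max() + 1e-8)  # normalize to [0, 1]

print(cam.shape)  # same spatial size as the conv feature map, here 64x64
```

The resulting `cam` is exactly the kind of mask described above: upsample it to the input resolution if needed, threshold it, and pass the highlighted pixels back through xtal2png's decoder to identify the structural fragments the model attends to. In practice you would swap in the trained CNN from the notebook and call `pytorch_grad_cam.GradCAM` directly instead of writing the hooks by hand.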
Hey, what is the progress on this project? I am interested in working on it.
I don't think either @kjappelbaum or I have immediate plans to explore the interpretability piece. Feel free to give it a try and let us know how it goes! Happy to provide feedback or suggestions.