LaMP
Visualizations for Interpretability
Hi,
I recently came across your paper while looking for multi-label classification techniques. It is very interesting work, and thank you for making your code publicly available. A major reason I am interested in it is the claim of interpretability. I know it has been some time since this code was written, but I have a question about it, and it would be great if you could share some insights.
Do you remember how you generated the three visualisations shown in the paper? I noticed configuration options such as `int_preds` and `attns_loss`, but I am not sure exactly how you produced the figures from them, and it would be great to get some insight into that.
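In case it helps clarify what I am after, below is a minimal sketch of the kind of plot I was hoping to reproduce, assuming the model exposes a label-to-input attention matrix. Everything in it (the array contents, sizes, and variable names like `attn_weights`) is my own placeholder assumption, not taken from your code:

```python
# Sketch of the attention heatmap I'd like to reproduce.
# Assumes an (n_labels x n_features) attention matrix from the model;
# here it is replaced by a random placeholder.
import numpy as np
import matplotlib.pyplot as plt

n_labels, n_features = 8, 20                          # hypothetical sizes
attn_weights = np.random.rand(n_labels, n_features)   # placeholder for real attention weights
attn_weights /= attn_weights.sum(axis=1, keepdims=True)  # normalize per label, as I'd expect from softmax attention

fig, ax = plt.subplots(figsize=(8, 4))
im = ax.imshow(attn_weights, aspect="auto", cmap="viridis")
ax.set_xlabel("Input features / tokens")
ax.set_ylabel("Labels")
ax.set_title("Label-to-input attention (placeholder data)")
fig.colorbar(im, ax=ax, label="Attention weight")
fig.tight_layout()
fig.savefig("attention_heatmap.png", dpi=150)
```

If the actual outputs produced by `int_preds` or `attns_loss` look different from this, even a pointer to the relevant part of the code would be very helpful.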