SelfAttentive
cannot visualize attention weights
I have successfully completed training and extracted the attention weights, but I get the following error when I run the Attention Visualization notebook:
16
17 vector = weights.sum( 0 )
---> 18 vector = vector / vector.sum( 1 )[ 0,0 ]
19 att, ids_to_show = vector.sort( 1, descending=True )
20
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
Can you let me know what is wrong here?
I think this is due to a PyTorch version change. In any case, the `weights` variable is a 30 × review_len matrix for each review. This is the weight matrix we want to manipulate for visualization; you can then follow the steps in the paper for the different variants of attention visualization.
vector = weights.sum( 0 )
vector = vector / vector.sum()
att, ids_to_show = vector.sort( 0, descending=True )
Try this. It sums all 30 attention hops and normalizes the result.
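To see why the original code breaks, here is a minimal sketch (the 30 × review_len shape is taken from the discussion above; the concrete review_len is made up for illustration). After `weights.sum(0)` the tensor is 1-D, so calling `.sum(1)` on it asks for a dimension that no longer exists, which is exactly the RuntimeError. Summing with no dim argument and sorting along dim 0 avoids that:

```python
import torch

# Hypothetical attention matrix: 30 attention hops x review_len tokens,
# mirroring the shape of the `weights` variable in the notebook.
review_len = 12
weights = torch.rand(30, review_len)

# Collapsing the 30 hops over dim 0 leaves a 1-D vector of length
# review_len, so the old `vector.sum(1)` call is out of range.
vector = weights.sum(0)                      # shape: (review_len,)

# Normalize with a plain .sum() and sort along the only dimension, 0.
vector = vector / vector.sum()
att, ids_to_show = vector.sort(0, descending=True)
```

The sorted `att` values then give the per-token attention, highest first, with `ids_to_show` holding the corresponding token positions.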
Thanks! This seems to work for the word attention visualization. For the sentence visualization, I tried this:
weights = Weights[ batchedReviewsAndTokens[ reviewID ][3] ][ 0 ][ batchedReviewsAndTokens[ reviewID ][ 4 ] ].data
vector = weights
instead of the original:
weights = Weights[ batchedReviewsAndTokens[ reviewID ][3] ][ 0 ][ batchedReviewsAndTokens[ reviewID ][ 4 ] ].data
vector = weights.sum( 0 )
It seems to work, but I still want to confirm that it is the correct way to do it.
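If the reason your change works is that the sentence-level `weights` tensor is already 1-D (one weight per sentence) while the word-level one is 2-D (hops × tokens), the two cases can be handled uniformly. This is a sketch under that assumed-shape interpretation, not the repo's actual code; the helper name and shapes are hypothetical:

```python
import torch

def to_attention_vector(weights):
    """Collapse a (hops x positions) matrix to one attention vector,
    or pass a tensor that is already 1-D straight through, then
    normalize so the weights sum to 1."""
    vector = weights.sum(0) if weights.dim() == 2 else weights
    return vector / vector.sum()

word_w = torch.rand(30, 15)   # hypothetical: 30 hops x 15 tokens
sent_w = torch.rand(8)        # hypothetical: 8 sentence weights

word_vec = to_attention_vector(word_w)   # summed over hops, then normalized
sent_vec = to_attention_vector(sent_w)   # only normalized
```

A quick `weights.dim()` check in the notebook would confirm which case you are in before deciding whether the `.sum(0)` is needed.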