Visualizing Saliency Map for Sentence Classification
Hi @raghakot ,
I just want to try the module on a model for sentence classification. Specifically, I would like to produce plots like the ones in this paper. Can you please let me know how to do this?
Regards, Nader
It should work out of the box. The input, in this case, is not an image. I will try to whip up a notebook showing how to do this. Do you have a pretrained network on some text classification corpus?
Hi @raghakot,
Many thanks for this. I am quite new in this field. So, excuse me for asking such a time-consuming thing.
Yes I do. Do you want me to share them with you?
Regards, Nader
Yep. Share them along with the dataset etc. I wanted to create examples in different domains anyways.
Alright, I sent the files to your email. Please note that I only trained the model for 10 epochs on the data available here (https://github.com/shagunsodhani/CNN-Sentence-Classifier), so it is a very simple model. Cheers, Nader
I will most likely get to it this weekend. Putting in work on office days is tricky.
Did anything come of this issue? I would be interested in seeing the notebook for this application. My situation is similar, except it is for DNA classification, and the paper I am trying to emulate is: https://arxiv.org/pdf/1608.03644.pdf.
I did not get a chance to work on this yet. However, I suggest you take a look at this issue: https://github.com/raghakot/keras-vis/issues/63 which has some working code. I will try to make a clean notebook example as soon as I get a chance.
Hi all (and @pexmar),
I have been working on this for the last 10 days or so, and I finally have code, adapted from here, which produces a saliency map for sentences (see below).
```python
import numpy as np
import matplotlib.pyplot as plt
from keras import backend as K

def plot_saliency(loaded_model, pure_txt, text_class, pred_labels, text_sequence):
    text_class = 'Positive' if text_class == 1 else 'Negative'
    pred_labels = 'Positive' if pred_labels == 1 else 'Negative'
    input_tensors = [loaded_model.input, K.learning_phase()]
    model_input = loaded_model.layers[2].input  # the input to the convolution layer
    model_output = loaded_model.output[0][1]
    gradients = loaded_model.optimizer.get_gradients(model_output, model_input)
    compute_gradients = K.function(inputs=input_tensors, outputs=gradients)
    matrix = compute_gradients([text_sequence.reshape(1, 30), text_class])[0][0]
    matrix = matrix[:len(pure_txt), :]
    # Repeat each row 10 times so each word's band is readable in the plot
    matrix_magnify = np.zeros((matrix.shape[0] * 10, matrix.shape[1]))
    for i in range(matrix.shape[0]):
        for j in range(10):
            matrix_magnify[i * 10 + j, :] = matrix[i, :]
    fig = plt.figure()
    ax = fig.add_subplot(111)
    plt.imshow(normalize_array(np.absolute(matrix_magnify)),
               interpolation='nearest', cmap=plt.cm.Blues)
    plt.yticks(np.arange(5, matrix.shape[0] * 10, 10), pure_txt,
               weight='bold', fontsize=24)
    plt.xticks(np.arange(0, matrix.shape[1], 50), weight='bold', fontsize=24)
    plt.title('True Label: "{}" Predicted Label: "{}"'.format(text_class, pred_labels),
              weight='bold', fontsize=24)
    plt.colorbar()
    plt.show()
```
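A side note on two details of the snippet above (this is my own sketch, not part of the original code): the nested row-magnification loop can be replaced by a single `np.repeat` call, and since `normalize_array` is not defined in the snippet, here is a minimal min-max version of what it presumably does:

```python
import numpy as np

def normalize_array(a, eps=1e-8):
    # Min-max scale to [0, 1]; eps guards against a constant array.
    return (a - a.min()) / (a.max() - a.min() + eps)

# Equivalent of the nested magnification loop: repeat every row 10 times.
matrix = np.arange(6, dtype=float).reshape(2, 3)
matrix_magnify = np.repeat(matrix, 10, axis=0)

print(matrix_magnify.shape)  # (20, 3)
```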
Please kindly correct me if I have done anything wrong in the code (I am not really an expert in programming). I also have a question: when I re-run this code, it gives different results, though with roughly the same distribution of gradients. That is, it shows higher gradients for important words and lower gradients for redundant words, but the actual values change every time I re-run it on the same sentence. Is this a problem?
Regards, Nader
Hi,
> When I re-run this code, it gives different results but with roughly the same distribution of gradients. I mean, it shows higher gradients for important words and lower for redundant words, but with other values, and it changes every time I re-run the code for the same sentence. Is there any problem with this?
This could be due to `K.learning_phase()`. I see that you are providing `text_class` as the learning phase; if your model has layers such as Dropout or BatchNormalization, it is better to set the learning phase to 0 so that those layers are inactive at test time. By the way, your code was really helpful for an implementation I was doing with multi-input text models.
Hope this helps
Thanks @ssierral. I don't use `text_class` as the learning phase, do I?
I tried `K.set_learning_phase(0)`, but it gives `TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a int into a Tensor`. Perhaps it is not possible to change the learning phase in my model? Can you please let me know how to solve this issue?
Regards, Nader
I think you set the learning phase by passing a 0 as the second input to the backend function:

```python
matrix = compute_gradients([text_sequence.reshape(1, 30), 0])[0][0]
```
Hi @raghakot, can you please share the code that you shared with @ndrmahmoudi for visualizing saliency maps for sentence classification?
Thanks, Chiranjeev
Hi @raghakot, I'm also working on something similar in PyTorch right now and would love to see the code you shared with @ndrmahmoudi for mapping saliency.
Hi @nnaliu, I managed to fix my problem; the working function is in one of my comments above. Feel free to use it and ask if you have any questions.