keras-vis
How to deal with multi-input models
My CNN architecture is similar to the following figure.
I need to update the API. There are a couple of options.
- You may want to visualize attention over all inputs, or over the input at a specific index.
- The same could be done for guided backprop as well.
I guess it is reasonable to add an additional param `input_indices`
to the various visualize methods, defaulting to 0 and taking a single value or an array of indices. Does that sound reasonable?
I am making this change unless you have any other use-cases to add.
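The proposed parameter can be sketched in isolation. The helper below is purely illustrative (it is not keras-vis code and the name `normalize_input_indices` is hypothetical); it shows how an `input_indices` argument could accept either a single index or a list of indices while defaulting to 0 for backward compatibility:

```python
def normalize_input_indices(input_indices=0):
    """Hypothetical helper (not actual keras-vis code): accept a single
    input index or an iterable of indices, defaulting to 0 so that the
    legacy single-input behavior is preserved."""
    if isinstance(input_indices, int):
        return [input_indices]
    return list(input_indices)

# Legacy single-input behavior: the default selects only the first input.
print(normalize_input_indices())        # [0]
# Multi-input case: visualize attention over inputs 0 and 2.
print(normalize_input_indices([0, 2]))  # [0, 2]
```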
It would be great to add such a param. In my case, I am trying some multi-view architectures and I want to get the saliency maps of these views so that I can explore the relations among them.
Has this enhancement been implemented?
It would indeed be great to get this enhancement working. I don't mind trying to implement it myself; I just need a tiny bit more guidance than your single post provides, @raghakot ...
Looking to run activation maximisation for multiple inputs.
Has anyone made progress regarding the visualization with multiple inputs?
Where would input_indices need to be included to implement this enhancement? Thanks!
@raghakot @xangma has there been any progress regarding this issue? It would be extremely helpful. I also asked on Stackoverflow, maybe somebody came up with a hack in the meantime: Multiple Inputs - SO
Is there any update on this? Is this already implemented?
More than a year after this was asked and we still don't have an answer?? Come on, guys! In @raghakot's comment it looked like a straightforward thing to do! I would REALLY, REALLY appreciate it if you could modify the toolbox to allow multiple inputs. I am finishing a paper I'm going to submit to Elsevier, and I would like to cite you guys in it (@raghakot)
Hi there.
I'm now working on this issue. Could anyone please give me a trained model file with multiple inputs?
Hi @keisen, https://www.dropbox.com/s/1unw41xoivrxrgh/species_keras_Resnet50_fold4_3input.zip?dl=0 Here are the trained model (with 3 inputs) and the sample images (input 1, input 2, and input 3). Thank you in advance!
@KatieHYT , Thank you so, so much ! I think this work will be completed within a few days.
Hi, @KatieHYT
I haven't completed this issue yet. I'm now facing a problem with gradient calculation in Keras (or TensorFlow), so it may take a while.
The problem occurs with nested models. (However, if the inputs of the sub-models are set as the inputs of the root model, it does not occur.) So Grad-CAM may not be calculable with your model.
Hi @keisen , I really appreciate your help. Attached are the model and model architecture. Hope this will help :) https://drive.google.com/drive/folders/1DTbVejKfglvZH4t6xPNrBlO3bDbjdH3A?usp=sharing
Hi there.
I've implemented this feature. The implementation can handle multi-input models.
The API specification follows @raghakot's idea ( https://github.com/raghakot/keras-vis/issues/33#issuecomment-307532238 ): an `input_indices` argument was added to the `visualize_*` functions.
Of course, its default value is zero, so this feature remains backward compatible.
Anyone interested, please try this implementation:
https://github.com/keisen/keras-vis/tree/features/%2333
@raghakot
Please review the concept and API specification of this feature. Is this what you imagined? If it isn't, I'll reimplement this feature from scratch.
PR #128
@KatieHYT
Thank you for sharing the model file; your help is much appreciated.
Unfortunately, I couldn't visualize using your model, i.e., it couldn't calculate gradients, because your model includes a Lambda layer between the model's input tensor and the three sub-models. Sorry, could you try other models?
Regards.
> Now, facing a problem with gradient calculation in Keras (or TensorFlow),

Since I could not find a workaround, I have given up on this problem. Nested models are not supported, because gradients may no longer be calculable with Keras.
Thank you so so much, @keisen ! :)
Hi,
I'm trying to use visualize_saliency on a network that takes two inputs: one is a 3D image and the other is a 1D vector. I'd like to visualize saliency on the 3D image. I'm confused about the usage of the parameters `wrt_tensor` and `input_indices`. Could you provide an example?
In my case, the 3D image is input_1 and the 1D vector is input_2.
Thanks
I don't think it's possible. I read the source code; the input to `Optimizer`
is always `model.input`,
which is a list of input tensors. The only workaround would be removing one of the inputs, though I don't know how to achieve this (it should be something like feeding a constant input to the Input tensor).
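The constant-input idea can be sketched with plain Keras, independent of keras-vis. Everything below is illustrative: the toy two-input model stands in for the real network, and the frozen zero-weight `Dense` layer is just one way to synthesize a constant stand-in for `input_2` with the right batch dimension, so that the wrapped model exposes a single input for gradient computation:

```python
import numpy as np
from tensorflow.keras import layers, Model

# Toy stand-in for a two-input network: an image input plus a 1D vector.
img_in = layers.Input(shape=(8, 8, 1), name="input_1")
vec_in = layers.Input(shape=(4,), name="input_2")
x = layers.Concatenate()([layers.Flatten()(img_in), vec_in])
out = layers.Dense(2, activation="softmax")(x)
two_input_model = Model([img_in, vec_in], out)

# Wrap it: derive a constant (all-zero) replacement for input_2 from the
# image via a frozen zero-weight Dense layer, so the image is the only
# remaining input and the batch dimension is handled automatically.
new_img_in = layers.Input(shape=(8, 8, 1), name="image_only")
const_vec = layers.Dense(
    4, use_bias=False, trainable=False,
    kernel_initializer="zeros")(layers.Flatten()(new_img_in))
wrapped = Model(new_img_in, two_input_model([new_img_in, const_vec]))

# The wrapped model now takes a single input; saliency with respect to the
# image could then be computed on it without touching the vector input.
preds = wrapped.predict(np.zeros((2, 8, 8, 1), dtype="float32"), verbose=0)
print(preds.shape)  # (2, 2)
```

A non-zero constant could be injected the same way via a bias initializer; the point is only that the second input becomes a deterministic function of nothing, leaving the image as the sole differentiable input.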