keras-vis
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
```python
from keras.applications import VGG16
from vis.utils import utils
from keras import activations
from vis.visualization import visualize_activation
# from vis.backend import sel

model = VGG16(weights="imagenet", include_top=False)
layer_idx = utils.find_layer_idx(model, 'block5_conv3')
img = visualize_activation(model, layer_idx, filter_indices=20)
```
I tried to run the above code, but I got the following error.
I am running on Windows 10 with keras-vis version 0.5.0.
I don't know where I made a mistake. I hope someone can help me. Thank you very much.
```
Traceback (most recent call last):
  File "D:/DLCode/amdemo/layerAMVersion2.py", line 12, in
```
Hi @DaQiZi, can I ask a question about your code?

```python
model = VGG16(weights="imagenet", include_top=False)
```

Is `include_top=False` what you intended? When `include_top=True`, the input shape of the model is `(?, 224, 224, 3)`. But when `include_top=False`, the input shape is `(?, ?, ?, 3)`, i.e., it is not fixed.
`visualize_activation` is a function that creates an input image that maximizes the loss, so it needs a model whose input shape is fixed.
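A standalone illustration (my own sketch, not keras-vis internals) of where the error in the title comes from: when the input shape contains `None` (unknown) dimensions, any size arithmetic on it ends up multiplying an `int` by `None`:

```python
# An unknown spatial dimension, as in the unfixed shape (?, ?, ?, 3).
height = None

try:
    pixels = 224 * height  # the kind of size computation that fails
except TypeError as err:
    print(err)  # unsupported operand type(s) for *: 'int' and 'NoneType'
```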
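A sketch of the fix implied above (my assumption, not confirmed in this thread): pass `input_shape` explicitly so the model's input is fixed even when `include_top=False`.

```python
from keras.applications import VGG16

# weights=None only to avoid the ImageNet download in a quick check;
# use weights="imagenet" as in the original code.
model = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
print(model.input_shape)  # fixed now: (None, 224, 224, 3)
```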
Thank you, I got it. My goal is that I want the output of a certain layer of the model to be as close as possible to a specified value.
In other words, I want to customize a loss function where g(xi) refers to the output of a whole convolution layer, not the output of a single filter of a convolution layer. I didn't get very good results with maximum activation myself, so I used this keras-vis package. Can the keras-vis package do this? I don't know much about it.
> My goal is, I want the output of a certain layer of the model to be as close as possible to the specified value

It seems that normal loss functions like `keras.losses.*` will achieve it. Did I understand you correctly? If not, could you please explain what "the specified value" is?
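If that is the goal, a minimal sketch of the idea in plain NumPy (the layer output and target values are made up for illustration, not from the thread) would be the mean squared error that `keras.losses.mean_squared_error` computes between a layer's output g(x) and a fixed target:

```python
import numpy as np

def mse(layer_output, target):
    # Mean squared error between a layer's output g(x) and a fixed target.
    return np.mean((layer_output - target) ** 2)

g_x = np.array([0.2, 0.8, 0.5])      # hypothetical layer output g(x)
target = np.array([0.0, 1.0, 0.5])   # "the specified value"
print(mse(g_x, target))
```

Minimizing this quantity with respect to the input image (rather than the weights) is exactly the kind of objective being discussed.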
I'm encountering the same issue. I am using a VGG16 model with `include_top=False` because I need a Global Average Pooling layer at the end, followed by a Dense layer with 1 output neuron for binary classification.
After training it, I want to visualise the activations with the following code:
```python
from vis.utils import utils
from keras import activations

# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'dense_4')

# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
```
And:
```python
import matplotlib.pyplot as plt  # import was missing in the original snippet

from vis.visualization import visualize_activation

plt.rcParams['figure.figsize'] = (18, 6)
img2 = visualize_activation(model, layer_idx)
plt.imshow(img2)
```
I get this error message:

```
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
```
> My goal is, I want the output of a certain layer of the model to be as close as possible to the specified value

> It seems that normal loss functions like `keras.losses.*` will achieve it. Did I understand you correctly? If not, could you please explain what "the specified value" is?
The formula I'm talking about is actually inversion. I noticed that the author tagged inversion as ready, but I couldn't find it.