lucid
Fully connected layer does not support using objectives.direction
I tried to use the objectives.direction() function on the fully connected layer of InceptionV1, but it raises an error.
InvalidArgumentErrorTraceback (most recent call last)
<ipython-input-17-d5f96d6202ac> in <module>()
4
5 obj = objectives.direction("softmax2_pre_activation",np.random.randn(1008))
----> 6 img = render.render_vis(model, obj)
4 frames
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
1368 pass
1369 message = error_interpolation.interpolate(message, self._graph)
-> 1370 raise type(e)(node_def, op, message)
1371
1372 def _extend_graph(self):
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Computed output size would be negative: -1 [input_size: 5, effective_filter_size: 7, stride: 1]
[[node import/avgpool0 (defined at /usr/local/lib/python2.7/dist-packages/lucid/modelzoo/vision_base.py:142) ]]
(1) Invalid argument: Computed output size would be negative: -1 [input_size: 5, effective_filter_size: 7, stride: 1]
[[node import/avgpool0 (defined at /usr/local/lib/python2.7/dist-packages/lucid/modelzoo/vision_base.py:142) ]]
[[Mean/_35]]
0 successful operations.
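The negative output size in the message follows directly from the standard VALID-padding formula for pooling/convolution ops. A small sketch (pool_output_size is a hypothetical helper, not part of lucid or TensorFlow) reproducing the numbers the error reports:

```python
# Hypothetical helper reproducing the output-size arithmetic in the error.
def pool_output_size(input_size, filter_size, stride):
    """Spatial output size of a VALID-padded pooling or conv op."""
    return (input_size - filter_size) // stride + 1

# The values reported for import/avgpool0:
# input_size: 5, effective_filter_size: 7, stride: 1
print(pool_output_size(5, 7, 1))  # -1, hence "Computed output size would be negative"

# InceptionV1 downsamples by a factor of 32 before avgpool0's fixed 7x7
# kernel, so a 224x224 input yields the 7x7 feature map the op expects:
print(pool_output_size(7, 7, 1))  # 1
```

In other words, the rendered image reaching avgpool0 was too small (a 5x5 feature map) for the fixed 7x7 kernel, which points at the input image size rather than at the objective itself.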
Is this function not supported on fully connected layers?
My goal in using direction is to invert the input image from the activation vector of the fully connected layer. If the direction function cannot achieve this, what should I do?
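For what it's worth, the error does not appear to come from direction itself but from the render size: render_vis defaults to a small parameterized image, while avgpool0's fixed 7x7 kernel requires the network's native 224x224 input. A possible workaround, sketched under the assumption that lucid's standard modelzoo and optvis APIs are used (untested here, and the empty transforms list is one way to keep random scaling from shrinking the image below 224px):

```python
# Hedged sketch: fix the parameterized image size rather than the objective.
import numpy as np
from lucid.modelzoo import vision_models as models
from lucid.optvis import objectives, param, render

model = models.InceptionV1()
model.load_graphdef()

obj = objectives.direction("softmax2_pre_activation", np.random.randn(1008))

# avgpool0 has a fixed 7x7 kernel, so the image must be 224x224 when it
# reaches that op; disabling the default random-scale/jitter transforms
# (or replacing them with size-preserving ones) keeps it that way.
img = render.render_vis(
    model,
    obj,
    param_f=lambda: param.image(224),
    transforms=[],
)
```

If this works, the same pattern should apply to any objective on a layer downstream of avgpool0, since all of them inherit the fixed-input-size constraint.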