Textrec model test: Backprop visualization
I want to get the result of the Visual Backprop visualization from the text-recognition demo, because I need to use the pixels of the words in the picture. The ideal output for a given input picture is a black-and-white picture of the same size, where white regions mark the characters. Is there any way I can get this?
Yes, it is no problem to get those images:

The `Variable` that is used to create those visualizations is called `vis_anchor` (https://github.com/Bartzi/see/blob/master/chainer/models/svhn.py#L52). Visual Backprop itself is performed by the `BBoxPlotter` (https://github.com/Bartzi/see/blob/master/chainer/insights/bbox_plotter.py#L126); that is the code it uses to compute the visualization. You should be able to use this information to generate the pictures on your own.
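In case it helps, here is a minimal sketch of how a Visual Backprop map could be computed from a list of intermediate feature maps such as the one stored in `vis_anchor`. This is not the repository's exact `BBoxPlotter` implementation, just the general idea of the technique (Bojarski et al., "VisualBackProp") written with standard Chainer functions; the names `feature_maps`, `visual_backprop`, and `save_mask` are illustrative, not part of the repo.

```python
# Sketch only: assumes you collected the intermediate feature maps
# (e.g. the tensors feeding vis_anchor) in a list ordered from the
# earliest (largest) to the deepest (smallest) layer.
import numpy as np
import chainer.functions as F
from PIL import Image


def visual_backprop(feature_maps, input_size):
    """Average each feature map over its channels, then walk back from the
    deepest map: upscale the running mask to the next (earlier) map's
    resolution and multiply point-wise, until the input resolution."""
    averaged = [F.mean(fm, axis=1, keepdims=True) for fm in feature_maps]
    mask = averaged[-1]                       # start with the deepest map
    for earlier in reversed(averaged[:-1]):
        mask = F.resize_images(mask, earlier.shape[2:])
        mask = mask * earlier                 # point-wise multiplication
    # finally scale the mask to the size of the input image (height, width)
    return F.resize_images(mask, input_size)


def save_mask(mask, path):
    """Normalize the mask to [0, 255] and save it as a grayscale image,
    so bright pixels mark the characters."""
    data = mask.data[0, 0]                    # first image, single channel
    data = (data - data.min()) / (data.max() - data.min() + 1e-8)
    Image.fromarray((data * 255).astype(np.uint8)).save(path)
```

To get a plain black-and-white image like you describe, you could threshold the normalized mask (e.g. `data > 0.5`) before saving. For the exact behaviour of the demo, please follow the `BBoxPlotter` code linked above.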