deep-visualization-toolbox
Is this a display tool, or a tool that generates the visualization images?
I've run this on my server and it gives this error: (Deep Visualization Toolbox:32051): Gtk-WARNING **: cannot open display:
So my question is: is this simply a tool for displaying visualization images, or does it also generate the visualization images for a DL model? I haven't gone into the details of the code yet.
Hi @sunbaigui, the ./run_toolbox.py command is only used to load the GUI that displays the visualizations, so it makes most sense to run it locally. The code for generating the visualization images for one's own network is still in the process of being posted (sorry for the slowness).
If you'd like to get started extracting the max images and max deconv images for your own network, you could consider running my existing scripts yourself, but admittedly without proper documentation, and with some parts hard-coded to my own setup. You'll probably have to hack a few pieces to suit your own setup. Here's everything you'll need:
https://gist.github.com/yosinski/285bc9ed2fd35cba2c76 find_max_acts.py
https://gist.github.com/yosinski/daecbd3a6db824295596 crop_max_patches.py
https://gist.github.com/yosinski/9051b60dc68c4cb0bad4 max_tracker.py
https://gist.github.com/yosinski/9958944c5bf08bd93ba2 loaders.py
https://gist.github.com/yosinski/8f7da9ce60c17d9e8d82 jby_misc.py
https://gist.github.com/yosinski/6c8167195b101c386ee5 caffe_misc.py
Step 1: run ./find_max_acts.py to go through a set of images and record which produce the maximal response for each channel/unit. This produces a file called something like maxes.pkl.
Step 2: run ./crop_max_patches.py to load maxes.pkl as well as whatever images are required, then save the appropriate patches and the deconv of those patches to a directory.
Step 3: point your settings.py to that new directory.
Note: these scripts will only generate the bottom two types of visualization (9 top / 9 deconv). Generating the 9 optimized images requires a bit more code, which will take me longer to extract. I hope to get to that soon.
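For readers who want to see what step 1 boils down to before digging into the scripts, here is a minimal sketch, assuming plain pycaffe and placeholder file names (deploy.prototxt, weights.caffemodel, image_list.txt). The real find_max_acts.py / max_tracker.py additionally handle batching, multiple layers, and bookkeeping of the spatial location of each maximum; mean subtraction is omitted here for brevity.

```python
import pickle
import caffe

# Placeholder model and data files; substitute your own.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))   # HxWxC -> CxHxW
transformer.set_raw_scale('data', 255)         # load_image returns floats in [0, 1]

layer = 'conv2'                                # layer whose units we want to rank
n_channels = net.blobs[layer].data.shape[1]
top_k = 9
best = [[] for _ in range(n_channels)]         # per channel: list of (score, image path)

for path in open('image_list.txt').read().split():
    im = caffe.io.load_image(path)
    net.blobs['data'].data[0] = transformer.preprocess('data', im)
    net.forward(end=layer)
    acts = net.blobs[layer].data[0]            # shape (channels, H, W)
    for c in range(n_channels):
        best[c].append((float(acts[c].max()), path))
        best[c] = sorted(best[c], reverse=True)[:top_k]

with open('maxes.pkl', 'wb') as f:
    pickle.dump(best, f)
```

crop_max_patches.py then uses the recorded maxima (including their spatial locations) to cut out the corresponding image patches and their deconv reconstructions.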
:+1: for a procedure to visualize "your own network"
I like this tool too, waiting for custom network visualization :+1:
@yosinski is it possible to have a small example showing the generation of the deconv image of an input image with your modified branch of caffe?
Do you know why the following code isn't working properly: https://gist.github.com/Trekky12/e99fdc52de0ec4fb77ee
Thanks in advance!
@Trekky12
is it possible to have a small example showing the generation of the deconv image of an input image with your modified branch of caffe?
Sure, here's where the deconv is computed: https://github.com/yosinski/deep-visualization-toolbox/blob/a74181698933498bf11096895dd971f47234da84/caffevis/app.py#L151-L162
And here's where it's plotted: https://github.com/yosinski/deep-visualization-toolbox/blob/a74181698933498bf11096895dd971f47234da84/caffevis/app.py#L988-L1022
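For a rough standalone version of what those lines do, here is a minimal sketch, assuming you have built the modified Caffe branch (which adds deconv_from_layer to pycaffe; stock BVLC Caffe does not have it) and using placeholder file, layer, and unit names:

```python
import numpy as np
import caffe

# Placeholder files, layer, and unit; adapt to your own setup.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)
layer, unit = 'conv5', 13

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_raw_scale('data', 255)   # caffe.io.load_image gives [0, 1] floats
net.blobs['data'].data[0] = transformer.preprocess('data', caffe.io.load_image('cat.jpg'))
net.forward()

# Seed the deconv pass: zero diffs everywhere except the chosen unit,
# which is set to its own activations.
diffs = np.zeros_like(net.blobs[layer].diff)
diffs[0, unit] = net.blobs[layer].data[0, unit]

# deconv_from_layer comes from the modified Caffe branch; the reconstruction
# ends up in the diff of the data blob.
net.deconv_from_layer(layer, diffs)
deconv_img = net.blobs['data'].diff[0].transpose(1, 2, 0)   # CxHxW -> HxWxC
```

The resulting array still has to be normalized (e.g. scaled around zero into a displayable range) before it looks like the images the toolbox shows; that happens in the plotting code linked above.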
Do you know why the following code isn't working properly: https://gist.github.com/Trekky12/e99fdc52de0ec4fb77ee
I'm not sure; what error do you get?
@yosinski thank you for your answer. The error was a totally gray image instead of the expected patterns. In the meantime I found the problem.
The method caffe.io.load_image(..) doesn't rescale the image, so the net needs to be initialized with a raw_scale of 255 instead of 1.0. With cv2.imread(..) the raw_scale can stay at 1.0.
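In case anyone else hits the same gray-image symptom, here is a short sketch of the two equivalent preprocessing setups, assuming a standard caffe.io.Transformer and placeholder model files (mean subtraction left out for brevity):

```python
import caffe
import cv2

# Placeholder model files; substitute your own.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Option 1: caffe.io.load_image returns RGB floats in [0, 1], so the
# transformer needs raw_scale=255 (plus an RGB -> BGR swap for caffenet-style nets).
t1 = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
t1.set_transpose('data', (2, 0, 1))        # HxWxC -> CxHxW
t1.set_raw_scale('data', 255)
t1.set_channel_swap('data', (2, 1, 0))
net.blobs['data'].data[0] = t1.preprocess('data', caffe.io.load_image('image.jpg'))

# Option 2: cv2.imread already returns BGR uint8 in [0, 255], so raw_scale
# can stay at its default of 1.0 and no channel swap is needed.
t2 = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
t2.set_transpose('data', (2, 0, 1))
net.blobs['data'].data[0] = t2.preprocess('data',
                                          cv2.imread('image.jpg').astype('float32'))
```

The only difference between the two loaders is the input range and channel order, so whichever one you use, raw_scale (and channel_swap) just has to match it.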
@yosinski , what is the 'layer' argument that you mention in crop_max_patches.py?
parser.add_argument('layer', type = str, help = 'Which layer to output')
@yosinski, could you please give some guidance on generating the "regularized_plot" images as well, now that we have generated the max images and the deconv images?
@bhack @yosinski the above three steps for visualizing my own network also give me some problems:
running python ./crop_max_patches.py ./output.pkl ./deploy.txt ./caffemodel ./img_val ./val.txt ./unit_jpg/vis/max_im conv2
I got this error: File "max_tracker.py", line 337, in output_max_patches: out_arr[:, out_ii_start:out_ii_end, out_jj_start:out_jj_end] = net.blobs['data'].data[0,:,data_ii_start:data_ii_end,data_jj_start:data_jj_end] ValueError: could not broadcast input array from shape (3,10,0) into shape (3,10,9)
Do you know why? Thanks!
@visonpon did you solve this problem? I have exactly the same problem as you.
@visonpon @pawanhsu I also have the same problem. Did you solve it?
Yeah, that is because the ImageNet example in Caffe uses a default image size of 256 and feeds the model 227x227 crops. In my case, however, the images are not cropped before being fed to the model for training, so the padding causes the overflow in the matrix.
@pawanhsu Thanks for the reminder, I have solved it! In caffe_misc.py, the __init__(self) of class RegionComputer(object) is "woefully hardcoded" just for caffenet, so it is necessary to rewrite it according to the new network. @visonpon
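For anyone else adapting the region computation to a different architecture, the underlying receptive-field arithmetic is small. Below is an illustrative sketch (not the actual RegionComputer code) that maps an index range in a layer's output back to input coordinates, given hypothetical (kernel, stride, pad) values read off your prototxt:

```python
def region_to_input(region, layers):
    """Map an inclusive index range in a layer's output back to input coordinates.
    `layers` lists (kernel, stride, pad) for every conv/pool layer from the
    input up to the layer of interest, in forward order."""
    start, end = region
    for kernel, stride, pad in reversed(layers):
        start = start * stride - pad
        end = end * stride - pad + kernel - 1
    return start, end

# Hypothetical caffenet-like stack up to conv2:
# conv1 (kernel 11, stride 4, pad 0), pool1 (3, 2, 0), conv2 (5, 1, 2).
stack = [(11, 4, 0), (3, 2, 0), (5, 1, 2)]
print(region_to_input((0, 0), stack))   # -> (-16, 34): a 51-pixel receptive field
```

Negative or out-of-range coordinates then have to be clipped to the image bounds before slicing the data blob; a mismatch between these numbers and the actual network is exactly the kind of thing that produces zero-width slices like the (3,10,0) shape in the broadcast error above.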
@bhack, @mtamburrano, @Trekky12 I've made the tool substantially more generic: all the hardcoded parts were replaced with new settings parameters, most of which are deduced automatically by analyzing the network definition file.
@visonpon, @pawanhsu, @qigreen I've also fixed numerous bugs, some of them in the receptive-field calculations, which seem related to the error mentioned above.
The latest version is at https://github.com/arikpoz/deep-visualization-toolbox