
Clustering of images based on fully-connected layer outputs

Open · JoshRosen opened this issue 11 years ago · 1 comment

Even though our dataset only classifies images into 10 classes, the examples could be further divided into additional subcategories. For example, consider the images of planes. We have images of the fronts of planes on the ground: [example images]

And images of planes flying against different colored skies: [example images]

Although these are all images of planes, it seems plausible that there could be a difference in the activation patterns between the two sets of images.

If we view the output of the fully-connected layers as signatures describing the images, then by computing distance metrics between these signatures we can rank the similarity of images.
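As a minimal sketch of this idea, the snippet below ranks images by cosine similarity between their signatures. The random array here is a hypothetical stand-in for the FC64 outputs, not real model data:

```python
import numpy as np

# Hypothetical stand-in for FC64 outputs: 6 images, 64-dimensional signatures.
rng = np.random.default_rng(0)
signatures = rng.random((6, 64)).astype(np.float32)

def rank_by_similarity(signatures, query_index):
    """Rank all images by cosine similarity to the query image's signature."""
    normed = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    sims = normed @ normed[query_index]
    return np.argsort(-sims)  # most similar first; the query itself ranks first

order = rank_by_similarity(signatures, query_index=0)
```

Any distance metric on the signature vectors would work here; cosine similarity is just one convenient choice because it ignores the overall magnitude of the activations.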

Given the FC64 or FC10 outputs for all of the images, we could apply clustering techniques to identify groups of images that the network classifies similarly. We could project the results of this clustering down to two dimensions and display the images according to this clustered layout.
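One way this could look, sketched with scikit-learn on hypothetical random data in place of the real FC64 outputs (k-means for the clustering, PCA for the 2-D projection):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical stand-in for FC64 outputs: 100 images x 64 dimensions.
rng = np.random.default_rng(0)
fc_outputs = rng.random((100, 64)).astype(np.float32)

# Group images that the network treats similarly.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(fc_outputs)

# Project down to 2-D for display; coords[i] is where to draw image i,
# and labels[i] decides which cluster it is shown with.
coords = PCA(n_components=2).fit_transform(fc_outputs)
```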

It would be interesting to see how this clustering evolves while the model is trained. I imagine that at first the similarity scores might reflect low-level image features, like overall color, but that over time the clusters would be based on higher-level features and some subcategories might begin to emerge.
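To quantify how much the clustering reorganizes between two training snapshots, something like the adjusted Rand index could compare the cluster assignments. The two arrays below are hypothetical random snapshots, not actual checkpoints:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Hypothetical FC10 snapshots at an early and a late timestep (100 images, 10 dims).
rng = np.random.default_rng(0)
probs_early = rng.random((100, 10)).astype(np.float32)
probs_late = rng.random((100, 10)).astype(np.float32)

def cluster(x):
    return KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(x)

# Near 1.0: the grouping is stable across training; near 0.0: it has reorganized.
drift = adjusted_rand_score(cluster(probs_early), cluster(probs_late))
```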

This view could also be useful for understanding misclassifications; I've noticed that the classifier sometimes confuses dogs and horses. Given a misclassified horse, being able to see the most similar dog images might help to explain the misclassification: maybe that particular horse image is atypical and is similar to some particular subgroup of dog images.
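A rough sketch of that lookup, again on hypothetical random signatures and labels: given a query image, find its nearest neighbors restricted to one target class (e.g. the dog images closest to a misclassified horse).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical signatures and true labels for 50 images.
rng = np.random.default_rng(0)
signatures = rng.random((50, 64)).astype(np.float32)
true_labels = rng.integers(0, 10, size=50)

def most_similar_in_class(signatures, true_labels, query_index, target_class, k=3):
    """Return indices of the k images of target_class closest to the query image."""
    candidates = np.flatnonzero(true_labels == target_class)
    nn = NearestNeighbors(n_neighbors=min(k, len(candidates)))
    nn.fit(signatures[candidates])
    _, idx = nn.kneighbors(signatures[query_index:query_index + 1])
    return candidates[idx[0]]  # map positions back to corpus indices

target = int(true_labels[0])  # pick a class guaranteed to be non-empty
neighbours = most_similar_in_class(signatures, true_labels,
                                   query_index=0, target_class=target)
```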

JoshRosen avatar Dec 08 '13 20:12 JoshRosen

The latest stats_db dump should have the data for this: https://www.dropbox.com/s/t5zg9dqb0eoe6lv/stats_db.zip

This should have the required data to begin exploring different clustering techniques. Here was my first attempt at using PCA to show the model at timestep 40 (colors according to true image class), which should be a decent example of how to use the data:

>>> from deepviz_webui.model_stats_db import ModelStatsDB
>>> db = ModelStatsDB("stats_db")
>>> probs = db.get_stats(40).probs_by_image
>>> probs
array([[  1.29584700e-03,   3.57754361e-05,   5.01396321e-02, ...,
          7.13144150e-03,   4.18440730e-04,   3.63701925e-04],
       [  1.50107443e-02,   9.17336904e-03,   3.14076282e-02, ...,
          4.42980155e-02,   6.94869012e-02,   1.35898665e-01],
       [  1.14151709e-01,   3.09785362e-03,   8.43245164e-03, ...,
          9.15541202e-02,   7.15444088e-02,   2.27550700e-01],
       ...,
       [  3.99635881e-01,   6.38790429e-03,   6.37241304e-02, ...,
          6.20891619e-03,   3.05531979e-01,   5.47074946e-04],
       [  9.86086950e-02,   9.97514580e-04,   5.42566031e-02, ...,
          6.17355225e-04,   2.41912946e-01,   9.71772615e-03],
       [  1.26740802e-02,   5.66647970e-04,   6.76987618e-02, ...,
          2.52383947e-02,   1.17670270e-02,   6.42727688e-03]], dtype=float32)
>>> from deepviz_webui.imagecorpus import CIFAR10ImageCorpus
>>> corpus = CIFAR10ImageCorpus("../cifar-10-py-colmajor")
>>> from sklearn.decomposition import PCA
>>> pca = PCA(2)
>>> r = pca.fit_transform(probs)
>>> r
array([[-0.22592825, -0.03838477],
       [ 0.1176122 , -0.12934557],
       [ 0.15507133,  0.13634114],
       ...,
       [ 0.39083847,  0.14840141],
       [ 0.07537034,  0.03404486],
       [-0.135039  , -0.00479577]], dtype=float32)
>>> import pylab as pl
>>> pl.scatter(r[:, 0], r[:, 1], c=corpus._image_labels)
<matplotlib.collections.PathCollection object at 0x110cea790>
>>> pl.show()

[figure_1: 2-D PCA projection of the FC10 outputs at timestep 40, points colored by true image class]

JoshRosen avatar Dec 10 '13 05:12 JoshRosen