
Plotting features using UMAP

fabriziojpiva opened this issue 4 years ago • 4 comments

Hello, thanks for such a great contribution to the field; it is really groundbreaking work.

I was trying to reproduce the plot of the features that you show in Figure 5 of the main manuscript using UMAP. How did you determine which features belong to those specific classes (building, traffic sign, pole, and vegetation)? From the output we can determine which class each pixel belongs to, but how did you do it in the feature space? By resizing the logits back to the feature-map shape, then taking the argmax to determine the correspondence?

fabriziojpiva avatar Apr 29 '21 14:04 fabriziojpiva

https://github.com/microsoft/ProDA/blob/9ba80c7dbbd23ba1a126e3f4003a72f27d121a1f/calc_prototype.py#L119-L122

The network has two outputs, `feat` and `out`; note that `feat` and `out` have the same spatial shape. The process is as follows (see the sketch after this list):

  1. Get pseudo labels by taking the `argmax` of `out`.
  2. For each class, select the corresponding `feat` at pixel level using the pseudo labels, then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features for each class.
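
In code, the idea is roughly the following. This is only a simplified sketch of the logic in the `calc_prototype.py` lines linked above, assuming `feat` is `[B, C, H, W]` and `out` is `[B, num_classes, H, W]`; the function name `class_prototypes` is mine:

```python
import torch
import torch.nn.functional as F

def class_prototypes(feat, out, num_classes):
    # Sketch: one image-level feature vector per class, per image.
    pseudo_label = out.argmax(dim=1)  # [B, H, W] pseudo labels (step 1)
    prototypes = []
    for cls in range(num_classes):
        mask = (pseudo_label == cls).unsqueeze(1).float()  # [B, 1, H, W]
        masked_feat = feat * mask                          # keep only this class's pixels
        # adaptive_avg_pool2d to 1x1 averages over all H*W positions, so
        # divide by the fraction of selected pixels to average only over them
        pooled = F.adaptive_avg_pool2d(masked_feat, 1)     # [B, C, 1, 1]
        frac = F.adaptive_avg_pool2d(mask, 1).clamp(min=1e-6)
        prototypes.append((pooled / frac).flatten(1))      # [B, C]
    return prototypes  # one [B, C] tensor per class
```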

super233 avatar May 02 '21 02:05 super233

> 2. For each class, select the corresponding `feat` at pixel level using the pseudo labels, then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features for each class.

Why is adaptive average pooling needed? To my understanding, if I were to plot features I would do the following:

  1. Get pseudo labels by taking the `argmax` of `out`. The resulting tensor `out_argmax` has shape `[batch_size, h, w]`, which I flatten into a one-dimensional vector called `class_ids` of size `[N]`, where `N = batch_size * h * w`.
  2. Reshape the features `feat` to match the vector of `class_ids`: from a feature tensor of shape `[batch_size, depth, h, w]` to a new shape `[N, depth]`. Let's call the resulting reshaped tensor `feats_r`.
  3. Store `class_ids` from step 1 and `feats_r` from step 2 in a pandas DataFrame. All the class ids and reshaped features are accumulated into a DataFrame `df` with `depth + 1` columns, where the first `depth` columns hold the features and the last one holds the class ids.
  4. Use UMAP to reduce all but the last column of `df`, and plot the resulting embeddings, coloring each point by its class id (see the sketch after this list).
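
Concretely, something like the following sketch, assuming `feat` and `out` are torch tensors and using `umap-learn`, pandas, and matplotlib (`n_neighbors=15` is just the library default here, not a value from the paper):

```python
import pandas as pd
import umap  # umap-learn
import matplotlib.pyplot as plt

def umap_plot(feat, out):
    # feat: [batch_size, depth, h, w], out: [batch_size, num_classes, h, w]
    class_ids = out.argmax(dim=1).flatten()                        # [N], step 1
    feats_r = feat.permute(0, 2, 3, 1).reshape(-1, feat.shape[1])  # [N, depth], step 2
    df = pd.DataFrame(feats_r.cpu().numpy())                       # step 3
    df["class_id"] = class_ids.cpu().numpy()
    # step 4: reduce everything except the class-id column to 2D
    emb = umap.UMAP(n_neighbors=15).fit_transform(df.drop(columns="class_id"))
    plt.scatter(emb[:, 0], emb[:, 1], c=df["class_id"], s=1, cmap="tab20")
    plt.show()
```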

fabriziojpiva avatar Jun 21 '21 15:06 fabriziojpiva

> https://github.com/microsoft/ProDA/blob/9ba80c7dbbd23ba1a126e3f4003a72f27d121a1f/calc_prototype.py#L119-L122
>
> The network has two outputs, `feat` and `out`; note that `feat` and `out` have the same spatial shape. The process is as follows:
>
> 1. Get pseudo labels by taking the `argmax` of `out`.
>
> 2. For each class, select the corresponding `feat` at pixel level using the pseudo labels, then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features for each class.

I just tried this approach, storing all these vectors in a DataFrame and then reducing it to 2D representations using UMAP, but I obtained very dense clusters compared to the figures in the manuscript, where the point clouds look much sparser. Could you please provide more information about these feature representations:

  1. Are these features computed on the training split of Cityscapes?
  2. What parameters are used for UMAP (`n_neighbors`, etc.)?
  3. Are these feature vectors computed per batch or per image?

I would be glad to hear from you. Thanks!

fabriziojpiva avatar Jun 23 '21 13:06 fabriziojpiva

No reply, right?

xylzjm avatar May 19 '23 09:05 xylzjm