ProDA
Plotting features using UMAP
Hello, thanks for such a great contribution to the field; it is really groundbreaking work.
I was trying to reproduce the plot of the features in Figure 5 of the main manuscript using UMAP. How did you determine which features belong to those specific classes (building, traffic sign, pole, and vegetation)? We can determine from the output which class each pixel belongs to, but how did you do it in the feature space? Resizing the logits back to the feature space shape, then argmax to determine the correspondence?
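For concreteness, what I mean by resizing the logits and taking the argmax would be roughly the following sketch (the tensor names and shapes here are my own assumptions, not taken from the repo):

```python
import torch
import torch.nn.functional as F

# Assumed shapes: feat is [B, depth, Hf, Wf], out (logits) is [B, num_classes, Ho, Wo].
feat = torch.randn(2, 256, 65, 129)
out = torch.randn(2, 19, 65, 129)

# Resize the logits to the spatial size of the feature map (a no-op if they already match),
# then argmax over the class dimension to assign a class id to every feature-map pixel.
out_resized = F.interpolate(out, size=feat.shape[2:], mode="bilinear", align_corners=False)
class_map = out_resized.argmax(dim=1)  # [B, Hf, Wf]
```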
https://github.com/microsoft/ProDA/blob/9ba80c7dbbd23ba1a126e3f4003a72f27d121a1f/calc_prototype.py#L119-L122
The network has two outputs, `feat` and `out`; note that `feat` and `out` have the same shape. The process is as follows (see the sketch below):

1. Get pseudo labels by using `argmax` on `out`.
2. For each class, select the corresponding `feat` at pixel level by the pseudo labels, and then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features of each class.
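Roughly, in code (a minimal sketch of these two steps; the function name and shapes are only illustrative, the exact implementation is in calc_prototype.py linked above):

```python
import torch
import torch.nn.functional as F

def image_level_class_features(feat, out, num_classes=19):
    """For every image and class, average the feature vectors of the pixels that the
    pseudo label (argmax of `out`) assigns to that class.
    Assumed shapes: feat [B, C, H, W], out [B, num_classes, H, W]."""
    pseudo = out.argmax(dim=1)                        # [B, H, W] pseudo labels
    B, C, H, W = feat.shape
    vectors, labels = [], []
    for b in range(B):
        for cls in range(num_classes):
            mask = pseudo[b] == cls                   # [H, W] pixels predicted as `cls`
            if mask.sum() == 0:
                continue                              # class absent from this image
            selected = feat[b][:, mask]               # [C, n_pixels] features of that class
            # Pool the selection down to a single vector; with output size 1 this is
            # simply the mean over the selected pixels.
            pooled = F.adaptive_avg_pool2d(selected.unsqueeze(-1), 1).reshape(C)
            vectors.append(pooled)
            labels.append(cls)
    return torch.stack(vectors), torch.tensor(labels)  # [M, C] features, [M] class ids
```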
Why is adaptive average pooling needed? To my understanding, if I were to plot features I would do the following (a rough sketch follows the list):

1. Get pseudo labels by using `argmax` on `out`. The resulting tensor `out_argmax` has a shape of `[batch_size, h, w]`, which I flatten into a one-dimensional vector called `class_ids` of size `[N]`, where `N = batch_size * h * w`.
2. Reshape the features `feat` to match the vector of `class_ids`: from a feature tensor of shape `[batch_size, depth, h, w]` to a new shape `[N, depth]`. Let's call the resulting reshaped tensor `feats_r`.
3. Store `class_ids` from 1) and `feats_r` from 2) in a pandas DataFrame. All the class ids and reshaped features are accumulated into a DataFrame `df` with `depth + 1` columns, where the first `depth` columns hold the features and the last one the class ids.
4. Use UMAP to reduce all but the last column of `df`, and plot the resulting embeddings using the class ids as the color of each point.
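For concreteness, the pixel-level version I have in mind looks roughly like this (a sketch; shapes are as assumed above, and the UMAP parameters shown are just the umap-learn defaults):

```python
import pandas as pd
import torch
import umap  # umap-learn

def pixel_level_dataframe(feat, out):
    """Flatten per-pixel features and their pseudo labels into one DataFrame.
    Assumed shapes: feat [batch_size, depth, h, w], out [batch_size, num_classes, h, w]."""
    class_ids = out.argmax(dim=1).reshape(-1)                      # [N], N = batch_size*h*w
    feats_r = feat.permute(0, 2, 3, 1).reshape(-1, feat.shape[1])  # [N, depth], same pixel order
    df = pd.DataFrame(feats_r.detach().cpu().numpy())
    df["class_id"] = class_ids.cpu().numpy()
    return df

# df = pixel_level_dataframe(feat, out)
# embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(df.drop(columns="class_id"))
# ...then scatter-plot `embedding` colored by df["class_id"].
```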
I just tried the approach you describe (per-class `F.adaptive_avg_pool2d` to get image-level feature vectors), storing all these vectors in a dataframe and then reducing this dataframe to 2D representations using UMAP, but I obtained very dense clusters compared to the figures in the manuscript, where the point clouds look more sparse. Could you please provide more information about these feature representations:
- Are these features computed on the training split of Cityscapes?
- What parameters are used for UMAP (n_neighbors, etc.)?
- Are these feature vectors computed per batch or per image?
I would be glad to hear from you. Thanks!
No reply, right?