
A question about clustering

Open CaptainPrice12 opened this issue 3 years ago • 4 comments

Thank you for sharing this work!

I have a question about the clustering process in the mmt_train_kmeans.py file:

dict_f, _ = extract_features(model_1_ema, cluster_loader, print_freq=50)
cf_1 = torch.stack(list(dict_f.values())).numpy()
dict_f, _ = extract_features(model_2_ema, cluster_loader, print_freq=50)
cf_2 = torch.stack(list(dict_f.values())).numpy()
cf = (cf_1 + cf_2) / 2

Here, I see that the mean-nets of model 1 and model 2 are used to generate the features for clustering and for initializing the classifiers. However, according to the MMT paper, the current models (model 1 and model 2) at each epoch are used to compute the features for clustering, rather than the mean-nets. Does using the mean-nets here give better performance? Could you explain this choice? Thanks!
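For context, the averaged features cf are then used for pseudo-labelling and classifier initialization. The following is only a hypothetical sketch of that general pattern, not code copied from the repo; the cluster count, the classifier attribute name, and the use of sklearn's KMeans are all assumptions here.

```python
# Hypothetical sketch (not the repo's exact code): cluster the averaged
# features to get pseudo labels, then initialize the classifiers with
# the normalized cluster centers.
import torch
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

num_clusters = 500  # assumed value; the repo passes this via command-line args

km = KMeans(n_clusters=num_clusters, random_state=0).fit(cf)
pseudo_labels = km.labels_                        # pseudo labels for the target images
centers = normalize(km.cluster_centers_, axis=1)  # L2-normalized cluster centers

# Assumed: each model exposes a linear `classifier` layer whose weights
# are initialized with the cluster centers.
for model in (model_1, model_2, model_1_ema, model_2_ema):
    model.module.classifier.weight.data.copy_(
        torch.from_numpy(centers).float().cuda())
```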

CaptainPrice12 avatar Mar 30 '21 03:03 CaptainPrice12

Yes, we adopted the mean-nets to extract features for clustering in each epoch, since the features extracted by the mean-nets are more robust. I don't remember the exact performance gap. You could try replacing them with the current nets.
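For reference, trying the current nets instead of the mean-nets would only require swapping the models passed to extract_features in the snippet above (an untested sketch; the resulting performance difference is not reported here):

```python
# Variant of the snippet above using the current nets instead of the
# mean-nets (untested sketch).
dict_f, _ = extract_features(model_1, cluster_loader, print_freq=50)
cf_1 = torch.stack(list(dict_f.values())).numpy()
dict_f, _ = extract_features(model_2, cluster_loader, print_freq=50)
cf_2 = torch.stack(list(dict_f.values())).numpy()
cf = (cf_1 + cf_2) / 2
```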

yxgeee avatar Mar 30 '21 12:03 yxgeee

Got it. Thank you so much for the reply! May I ask one more question?

In the OpenUnReID repo, the implementation of MMT should be MMT+, right? In that repo, source_pretrain/main.py uses only source-domain data for pre-training, unlike the MMT repo, which uses both source and target data (target images are only forwarded).

Meanwhile, for target training, OpenUnReID uses both source (labeled) and target (pseudo-labeled) data to conduct MMT training, together with a MoCo-based loss and DSBN, right? Please correct me if I have misunderstood anything. Thanks!

CaptainPrice12 avatar Mar 30 '21 17:03 CaptainPrice12

MMT+ in OpenUnReID does not adopt source-domain pre-training. It conducts MMT training from scratch (starting from ImageNet pre-trained weights) using images from both domains. The MoCo loss is not adopted, while DSBN is used. The MoCo loss only appears in the repo for the VisDA challenge.
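To make the DSBN point concrete, below is a minimal illustrative sketch of domain-specific batch normalization: one BatchNorm per domain, with each batch routed through the normalizer matching its domain. This is not the OpenUnReID implementation; the class and argument names are made up for illustration.

```python
# Minimal sketch of the DSBN idea (domain-specific batch normalization).
# Not the OpenUnReID implementation; names are illustrative only.
import torch
import torch.nn as nn

class DSBN2d(nn.Module):
    def __init__(self, num_features, num_domains=2):
        super().__init__()
        # One BatchNorm2d (with its own running statistics and affine
        # parameters) per domain, e.g. domain 0 = source, domain 1 = target.
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains))

    def forward(self, x, domain):
        # `domain` selects which set of BN statistics/parameters to use.
        return self.bns[domain](x)

# Usage sketch:
dsbn = DSBN2d(num_features=64)
source_feat = dsbn(torch.randn(8, 64, 32, 32), domain=0)
target_feat = dsbn(torch.randn(8, 64, 32, 32), domain=1)
```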

yxgeee avatar Mar 31 '21 13:03 yxgeee

Thanks for the help!

CaptainPrice12 avatar Mar 31 '21 15:03 CaptainPrice12