yulu0724

7 comments by yulu0724

In Table 1 we show the results of 'E-pre'; it is indeed sometimes better than fine-tuning with metric learning. Applying other methods (LwF, MAS, EWC) and our SDC could outperform the...

We use lr 1e-6 and 50 epochs for CUB-200-2011 (100 classes) and Caltech-101 (Fig. 6). The parameters for the other datasets are given in the 'Implementation Details' section of the paper.
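
For reference, this setting maps to only a few lines of training code. A minimal sketch, assuming a standard PyTorch/torchvision setup; the optimizer choice and names below are illustrative assumptions, not the repository's actual script:

```python
import torch
from torchvision.models import resnet18

# Hedged sketch of the setting quoted above, not the authors' exact script:
# an ImageNet-pretrained backbone fine-tuned with lr 1e-6 for 50 epochs
# on CUB-200-2011 (first 100 classes) or Caltech-101.
model = resnet18(pretrained=True)  # ImageNet weights, as in the comments
# NOTE: Adam is an assumption here; the paper's optimizer may differ.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)
NUM_EPOCHS = 50
```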

> We follow the LwM paper and use the first 100 classes of CUB-200 in Fig. 6. For CUB and Caltech we use the ImageNet-pretrained model.

> Hi,
>
> Which part of the code computes the approximation of semantic drift compensation? I mean, equations (10) and (11) in the paper.
>
> https://openaccess.thecvf.com/content_CVPR_2020/papers/Yu_Semantic_Drift_Compensation_for_Class-Incremental_Learning_CVPR_2020_paper.pdf
>
> ...
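
The thread does not point to the exact lines, but equations (10) and (11) of the paper, as I read them, define the drift approximation itself: each old class prototype is shifted by a Gaussian-weighted mean of the embedding drift of the current-task samples. A minimal NumPy sketch under those assumptions; the function name, the `sigma` default, and the shapes are illustrative, not the repository's actual code:

```python
import numpy as np

def estimate_drift(z_prev, z_curr, prototypes, sigma=0.3):
    """Hedged sketch of SDC's drift approximation (eqs. (10)-(11), as I read them).

    z_prev, z_curr: (N, D) embeddings of the *current-task* data, computed with
                    the model before and after training on the new task.
    prototypes:     (C, D) class means of previous tasks (old embedding space).
    Returns the estimated drift (C, D) to add to each old prototype.
    """
    delta = z_curr - z_prev                                   # per-sample drift
    # Squared distances between old prototypes and old embeddings: (C, N)
    d2 = ((prototypes[:, None, :] - z_prev[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))                        # eq. (10): Gaussian weights
    # eq. (11): weighted mean of sample drifts, one drift vector per prototype
    return (w[:, :, None] * delta[None, :, :]).sum(1) / w.sum(1, keepdims=True)

# Usage: mu_new = mu_old + estimate_drift(z_prev, z_curr, mu_old)
# i.e., the compensated prototypes for the nearest-class-mean classifier.
```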

Hi Gregoire, for CIFAR and ImageNet-sub we use triplet_no_hard_mining as the loss.
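
For readers unfamiliar with the term: "no hard mining" means triplets are built from all valid anchor/positive/negative combinations in the batch rather than mining only the hardest ones. A minimal sketch of that standard formulation, assuming L2 distances; the name and the margin value are illustrative and not necessarily what the repository's `triplet_no_hard_mining` does internally:

```python
import torch
import torch.nn.functional as F

def triplet_no_hard_mining(embeddings, labels, margin=0.2):
    """Triplet loss over all valid triplets in the batch (no hard mining).

    embeddings: (B, D) embedding vectors; labels: (B,) class ids.
    Illustrative sketch, not necessarily the repository's implementation.
    """
    dist = torch.cdist(embeddings, embeddings)        # (B, B) pairwise L2
    same = labels[:, None] == labels[None, :]         # positive-pair mask
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    losses = []
    for a in range(len(labels)):
        pos = dist[a][same[a] & not_self[a]]          # anchor-positive distances
        neg = dist[a][~same[a]]                       # anchor-negative distances
        if len(pos) == 0 or len(neg) == 0:
            continue
        # every positive paired with every negative, no mining
        losses.append(F.relu(pos[:, None] - neg[None, :] + margin).mean())
    return torch.stack(losses).mean() if losses else dist.sum() * 0.0
```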

Hi authors,

Did you finally update the code for evaluating the val set? I still get some errors when testing on val. What shall I change?

Thanks! Best, Lu

> > Hi authors,
> >
> > Did you finally update the code for evaluating the val set? I still get some errors when testing on val.
> >
> > What shall...