online-hyperparameter-optimization
Unable to reproduce results
Hi, I'm trying to reproduce the results for LUCIR with the proposed method. I used all the default parameters without any change, except for setting kd_weight_iterations and lr_weight_iterations to 25 (the paper states T=25 policy iterations). However, it performed worse than LUCIR in both the TFH and TFS settings for CIFAR-100 (N=5). I could only get 61.87 (TFH) and 62.07 (TFS), while the baseline itself achieves 63.3 (TFH) and 62.39 (TFS). Could you please help me with this?
I also want to know about the AANets and RMM implementations. Will you be able to provide them? Thanks!
@Akila-Ayanthi
Thank you so much for your interest in our work. I will re-run the experiments and check the code. I will keep you posted when I get my results. We will also release the code for other baselines in the future.
As I just moved to a new university, it may take me a bit longer to update the code for this project. I am sorry for that.
If you have any further questions, please do not hesitate to contact me.
Best,
Yaoyao
Thank you Yaoyao. Looking forward to hearing from you soon.
@yaoyao-liu
I have a question. In my understanding, the code uses the original validation data to compute the accuracy for updating the policy. Shouldn't it use the smalltestloader instead? Am I wrong?
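To illustrate the concern, here is a minimal, self-contained sketch (with hypothetical names; the actual repository uses different identifiers) of the separation I have in mind: the policy update should be driven only by accuracy on a held-out validation split, while the small test split is reserved for final reporting:

```python
import random

def accuracy(model, data):
    """Fraction of (x, y) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def select_hparam(candidates, model_for, val_data):
    """Pick the hyperparameter whose model scores best on the
    held-out *validation* split -- never on the test split."""
    return max(candidates, key=lambda h: accuracy(model_for(h), val_data))

# Hypothetical toy setup: the "model" is a threshold classifier and
# the hyperparameter being tuned is the threshold itself.
random.seed(0)
val_data = [(x, int(x > 0.4)) for x in (random.random() for _ in range(100))]
test_data = [(x, int(x > 0.4)) for x in (random.random() for _ in range(100))]
model_for = lambda t: (lambda x: int(x > t))

# Policy/hyperparameter selection uses val_data only.
best = select_hparam([0.1, 0.4, 0.8], model_for, val_data)
# The test split is touched once, for the final reported number.
final_acc = accuracy(model_for(best), test_data)
print(best, final_acc)
```

The point of the sketch is only the data-flow: if the same split drives both the policy update and the reported accuracy, the reported numbers are optimistically biased.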
@Akila-Ayanthi
Thanks for your questions. I will carefully check this.