alpha_mix_active_learning
Error when using a model other than MLP
Dear authors,
python main.py \
    --data_name CIFAR100 --data_dir dataset --log_dir log_output \
    --n_init_lb 1000 --n_query 1000 --n_round 10 --learning_rate 0.001 --n_epoch 50 --model vit_small \
    --strategy All --alpha_opt --alpha_closed_form_approx --alpha_cap 0.2 --pretrained_model
Is it OK to keep alpha_cap at 0.2 for the vit_small model?
I don't know why the program died partway through, and the accuracy looks very odd. It would be helpful if the authors could provide a script to run all datasets with the different settings.
"0.2529","0.0"
"0.302","0.0010182857513427734"
"0.3343","0.0009670257568359375"
"0.3441","0.0009663105010986328"
"0.3327","0.0010838508605957031"
Please set --strategy=AlphaMixSampling so that you run our AL approach (i.e. ALFA-Mix). When using "All", the code tries to run all the baselines (except ALFA-Mix) sequentially, starting from Random Sampling. In the experiments reported in the paper, we used 0.2 for alpha_cap.
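For reference, a corrected invocation could look like the sketch below. It simply reuses the flags from the original post (the data/log paths and hyperparameters are the poster's, not verified defaults) and swaps the strategy for AlphaMixSampling as suggested in the reply:

python main.py \
    --data_name CIFAR100 --data_dir dataset --log_dir log_output \
    --n_init_lb 1000 --n_query 1000 --n_round 10 --learning_rate 0.001 --n_epoch 50 --model vit_small \
    --strategy AlphaMixSampling --alpha_opt --alpha_closed_form_approx --alpha_cap 0.2 --pretrained_model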