
applying mean teacher to my own dataset

Open · liangzimei opened this issue on Sep 03 '18 · 5 comments

Hi, I have already achieved ~94% accuracy with 4000 labels on CIFAR-10. But for my own three-class classification task, where I have 160k labelled samples plus unlabelled data, I cannot get the expected results (worse than training directly on the labelled data alone). Is the LR strategy sensitive here? Here is my setting (fine-tuning MobileNet-v1 on 4 GPUs). Thanks in advance.

defaults = {

    # Technical details
    'workers': 20,
    'checkpoint_epochs': 20,
    'evaluation_epochs': 5,

    # Data
    'dataset': 'my dataset',
    'train_subdir': 'train',
    'eval_subdir': 'test',

    # Data sampling
    'base_batch_size': 100,
    'base_labeled_batch_size': 50,

    # Architecture
    'arch': 'mnet1',

    # Costs
    'consistency_type': 'mse',
    'consistency_rampup': 5,
    'consistency': 20.0,
    'logit_distance_cost': .01,
    'weight_decay': 2e-4,

    # Optimization
    'lr_rampup': 0,
    'base_lr': 0.001,
    'nesterov': True,
}

liangzimei · Sep 03 '18 06:09
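For reference: with `'consistency': 20.0` and `'consistency_rampup': 5`, the consistency cost reaches full strength after only five epochs, which may push the student toward a teacher that is still poorly trained; the ramp-up length is worth tuning alongside the learning rate. Below is a minimal sketch of the sigmoid ramp-up described in the Mean Teacher paper; the function names are illustrative, not necessarily the repo's exact helpers.

```python
import numpy as np

def sigmoid_rampup(current_epoch, rampup_length):
    """Exponential ramp-up from 0 to 1 over `rampup_length` epochs (Mean Teacher paper)."""
    if rampup_length == 0:
        return 1.0
    current = np.clip(current_epoch, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))

def get_consistency_weight(epoch, consistency=20.0, consistency_rampup=5):
    """Current weight of the consistency (MSE) cost term."""
    return consistency * sigmoid_rampup(epoch, consistency_rampup)

# With the settings posted above, the weight is already at its full value
# of 20 by epoch 5:
for epoch in range(7):
    print(epoch, round(get_consistency_weight(epoch), 2))
```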

Same problem here.

YilinLiu97 · Feb 13 '19 07:02

Did you fix the problem?

fahad7033 · Apr 08 '20 22:04

@fahad7033 I haven't worked on this in a long time; you could try training for more iterations.

liangzimei · Apr 17 '20 03:04
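Some context for the "more iterations" suggestion: the teacher's weights are an exponential moving average of the student's, so the teacher lags behind, and short runs can evaluate a teacher that has not caught up yet. Here is a minimal sketch of the EMA step, assuming a standard PyTorch setup; the helper name is illustrative rather than the repo's exact function.

```python
import torch
import torch.nn as nn

def update_ema_teacher(student: nn.Module, teacher: nn.Module,
                       alpha: float = 0.999, global_step: int = 0) -> None:
    """One EMA step: teacher = alpha * teacher + (1 - alpha) * student."""
    # Use a smaller effective alpha early on so the teacher can track
    # the freshly initialised student more quickly.
    alpha = min(1.0 - 1.0 / (global_step + 1), alpha)
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

# Tiny usage example: the teacher only slowly follows the student.
student, teacher = nn.Linear(4, 3), nn.Linear(4, 3)
teacher.load_state_dict(student.state_dict())
for step in range(1, 11):
    # ... the student optimisation step would go here ...
    update_ema_teacher(student, teacher, alpha=0.999, global_step=step)
```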

> (quotes @liangzimei's original post and settings, shown above)

So I wonder how to create my own dataset for this repo (see the sketch after this comment).

jetoHui520 · May 26 '22 10:05
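On the dataset question: below is a hedged sketch of registering a custom image dataset, modelled on the CIFAR-10/ImageNet entries the repo ships in its datasets module. The directory names, dictionary keys, image size, and normalisation stats are assumptions to adapt, not values taken from the repo; as I understand it, the actual code also wraps the train transform so each image yields two differently augmented views (one for the student, one for the teacher), which is omitted here to keep the sketch self-contained.

```python
import torchvision.transforms as transforms

# Assumed ImageFolder-style layout relative to 'datadir':
#   mydataset/
#     train/<class_name>/*.jpg   <- 'train_subdir'
#     test/<class_name>/*.jpg    <- 'eval_subdir'
# plus a labels file listing "<filename> <class_name>" for the labelled subset.

def mydataset():
    channel_stats = dict(mean=[0.485, 0.456, 0.406],   # assumed ImageNet stats
                         std=[0.229, 0.224, 0.225])
    train_transformation = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(**channel_stats),
    ])
    eval_transformation = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(**channel_stats),
    ])
    return {
        'train_transformation': train_transformation,
        'eval_transformation': eval_transformation,
        'datadir': 'data-local/images/mydataset',
        'num_classes': 3,
    }
```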

I have the same problem: how to apply Mean Teacher to my own dataset (especially audio) and model.

paozhuanis1 · Mar 08 '24 03:03
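Not an answer from the authors, but a minimal sketch of the kind of dataset wrapper Mean Teacher needs for audio: each item returns two independently augmented views of the same clip (one for the student, one for the teacher), and unlabelled clips carry a placeholder label so the training loop can apply the supervised loss only where labels exist. The class name, augmentations, and `NO_LABEL` convention are illustrative; plug in torchaudio or librosa loading as appropriate.

```python
import torch
from torch.utils.data import Dataset

NO_LABEL = -1  # placeholder label for the unlabelled portion

class TwoViewAudioDataset(Dataset):
    def __init__(self, waveforms, labels, noise_std=0.005):
        # waveforms: list of 1-D float tensors; labels: list of ints or NO_LABEL
        self.waveforms = waveforms
        self.labels = labels
        self.noise_std = noise_std

    def _augment(self, wav):
        # Simple perturbations: additive noise and a random gain.
        wav = wav + self.noise_std * torch.randn_like(wav)
        return wav * (0.9 + 0.2 * torch.rand(1))

    def __len__(self):
        return len(self.waveforms)

    def __getitem__(self, idx):
        wav = self.waveforms[idx]
        # Two independent augmentations: one for the student, one for the teacher.
        return (self._augment(wav), self._augment(wav)), self.labels[idx]

# Usage: labelled and unlabelled clips live in the same dataset; the training
# loop applies the classification loss only where the label is not NO_LABEL
# and the consistency loss everywhere.
data = TwoViewAudioDataset([torch.randn(16000) for _ in range(4)],
                           [0, 1, NO_LABEL, NO_LABEL])
(student_view, teacher_view), label = data[2]
```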