Hao

9 comments of Hao

The results on CIFAR-10 and CIFAR-100 are fine-tuned from the pretrained weights.

The CIFAR results are fine-tuned from the pretrained weights, not trained from scratch.
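For context, here is a minimal sketch of what fine-tuning from pretrained weights can look like in PyTorch; the checkpoint path, the backbone, and the head-replacement step are illustrative assumptions, not the repository's actual loading code.

```python
# A minimal fine-tuning sketch (the checkpoint path and backbone are
# illustrative assumptions, not the repository's actual loading code).
import torch
import torchvision

model = torchvision.models.resnet50()
state = torch.load("pretrained.pth", map_location="cpu")
# Drop the classifier weights so the original head does not clash in
# shape with the new CIFAR head; strict=False then skips the missing keys.
state = {k: v for k, v in state.items() if not k.startswith("fc.")}
model.load_state_dict(state, strict=False)
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new CIFAR-10 head
```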

For which algorithm do you observe the performance drop after resuming?

Can you provide the log files for training (and for resuming training) for more information?

Hi, we will add a demonstration for custom NLP data, but currently only CV datasets are supported. For now, the easiest way is to use your own custom Dataset for NLP data...

You can reference the dataset class we used for NLP (https://github.com/microsoft/Semi-supervised-learning/blob/main/semilearn/datasets/nlp_datasets/datasetbase.py) when writing your own. To run the algorithms on a custom dataset, you can refer to this notebook (https://github.com/microsoft/Semi-supervised-learning/blob/main/notebooks/Custom_Dataset.ipynb). You only need to...
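As a starting point, here is a minimal sketch of a custom NLP dataset under the standard PyTorch Dataset interface; the field names (idx, text, label) are illustrative assumptions and should be matched to what the semilearn dataset base class linked above actually returns.

```python
# A minimal sketch of a custom NLP dataset under the standard PyTorch
# Dataset interface; the field names (idx, text, label) are illustrative
# assumptions, to be matched to semilearn's own dataset base class.
from torch.utils.data import Dataset

class CustomNLPDataset(Dataset):
    def __init__(self, texts, labels=None):
        self.texts = texts    # list of raw strings
        self.labels = labels  # None for the unlabeled split

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Returning idx alongside the data lets the trainer track
        # per-sample state (e.g., pseudo-labels) across iterations.
        item = {"idx": idx, "text": self.texts[idx]}
        if self.labels is not None:
            item["label"] = self.labels[idx]
        return item
```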

The index 'idx_ulb' used to update selected_label is the index into the original dataset, not the batch index. You can check the dataset code to verify this: https://github.com/TorchSSL/TorchSSL/blob/f2f46076cbea1b6f6c9b3c1c45609502c6576250/datasets/dataset.py#L87
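To illustrate why this matters, here is a hedged sketch of the update pattern: the per-sample buffer selected_label is sized by the whole unlabeled set, so batch-local indices would overwrite the wrong entries. The variable names echo the TorchSSL code, but the function itself is illustrative, not the repository's implementation.

```python
# Sketch of updating a per-sample buffer with dataset-level indices.
# selected_label covers the full unlabeled set, so idx_ulb must be the
# index returned by Dataset.__getitem__, not the position in the batch.
import torch

num_ulb = 50000  # example size of the unlabeled dataset
selected_label = torch.full((num_ulb,), -1, dtype=torch.long)

def update_selected_label(idx_ulb, pseudo_label, mask):
    # idx_ulb: dataset-level indices returned by Dataset.__getitem__
    # pseudo_label: argmax predictions for the batch
    # mask: boolean confidence mask over the batch
    selected_label[idx_ulb[mask]] = pseudo_label[mask]
```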

If you are using distributed training, i.e., multiprocessing_distributed is set to True, num_train_iter and epoch jointly determine the training iterations per epoch as num_train_iter // epoch. If multiprocessing_distributed is set to...
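The arithmetic in the distributed case is just integer division; the values below are example settings, not the repository's defaults.

```python
# Iterations per epoch when multiprocessing_distributed=True:
# example settings, not the repository's defaults.
num_train_iter = 1048576                  # total training iterations (2**20)
epoch = 1024                              # number of epochs
iter_per_epoch = num_train_iter // epoch  # -> 1024 iterations per epoch
```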