TorchSemiSeg

Labeled and Unlabeled dataloaders

nysp78 opened this issue 2 years ago · 1 comment

Hello, I want to ask how you handle data loading when there is more unlabeled data than labeled data. I have read a couple of approaches for this:

1. As you have done, define an epoch as one pass of all the unlabeled data through the network. With this, the labeled data is passed through the network multiple times per epoch.
2. Use a sampler so that each training step draws an amount of unlabeled data matching the labeled data, i.e. two dataloaders of the same effective size (a sketch of what I mean is below).

Which of these two techniques would lead to better model performance? Generally, I'm a bit confused about how to construct the labeled and unlabeled dataloaders in a semi-supervised setting. Any hints would be appreciated!
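For concreteness, here is a minimal sketch of the second approach; the datasets, sizes, and batch size are made up for illustration:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Toy stand-ins: 100 labeled images with class labels, 1000 unlabeled images.
labeled_set = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))
unlabeled_set = TensorDataset(torch.randn(1000, 3, 32, 32))

# Draw only len(labeled_set) unlabeled samples per epoch (with replacement),
# so both loaders yield the same number of batches.
unlabeled_sampler = RandomSampler(unlabeled_set, replacement=True,
                                  num_samples=len(labeled_set))

labeled_loader = DataLoader(labeled_set, batch_size=8, shuffle=True)
unlabeled_loader = DataLoader(unlabeled_set, batch_size=8, sampler=unlabeled_sampler)

for (x_l, y_l), (x_u,) in zip(labeled_loader, unlabeled_loader):
    ...  # one training step: supervised loss on (x_l, y_l), unsupervised loss on x_u
```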

Thanks in advance.

nysp78 · Oct 08 '22

Hi, I recommend the first one: define an epoch as one pass of all the unlabeled data through the network.

To construct the dataloaders, I use the argument config.max_samples to specify the maximum number of samples in an epoch: https://github.com/charlesCXK/TorchSemiSeg/blob/f67b37362ad019570fe48c5884187ea85f2cc045/exp.voc/voc8.res50v3%2B.CPS/dataloader.py#L82

The labeled set is sampled repeatedly until that maximum number of samples is reached.
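To illustrate, here is a minimal sketch of the idea, not the repo's exact code: the labeled file list is tiled until it reaches the target length, so one epoch over it matches one pass over the unlabeled data. The numbers are illustrative (roughly a 1/16 VOC split).

```python
import math
import random

# Illustrative numbers, not taken from the repo's configs:
labeled_names = [f"img_{i:04d}" for i in range(662)]  # labeled file list
max_samples = 10582                                   # e.g. size of the unlabeled set

# Repeat the labeled list until it covers max_samples, then truncate and
# shuffle, so the labeled names are oversampled to the unlabeled epoch length.
repeats = math.ceil(max_samples / len(labeled_names))
epoch_names = (labeled_names * repeats)[:max_samples]
random.shuffle(epoch_names)

assert len(epoch_names) == max_samples
```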

charlesCXK · Oct 18 '22