Xingyi Yang
@Re-dot-art No need for separate training. ConsistentTeacher enables end-to-end training, which means the labeled and unlabeled data are fed to the model at the same time. The...
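To make that concrete, here is a minimal PyTorch-style sketch of one such end-to-end iteration in the mean-teacher paradigm; every name here (`student`, `teacher`, `train_step`) is illustrative, not the actual ConsistentTeacher API:

```python
import torch

def train_step(student, teacher, labeled_batch, unlabeled_batch,
               optimizer, ema_momentum=0.999):
    imgs_l, targets_l = labeled_batch
    imgs_u = unlabeled_batch

    # The teacher produces pseudo-labels for the unlabeled images.
    with torch.no_grad():
        pseudo_targets = teacher(imgs_u)

    # The student trains on both streams in the same iteration:
    # supervised loss on labeled data + pseudo-label loss on unlabeled data.
    loss = student.loss(imgs_l, targets_l) + student.loss(imgs_u, pseudo_targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # The teacher tracks the student via an exponential moving average (EMA).
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema_momentum).add_(p_s, alpha=1 - ema_momentum)
```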
As mentioned in the README, all experiments in the paper use 8 GPUs × 5 samples per GPU for training. A smaller batch size gives worse results, as expected. But your results seem to be too low, which even...
Could you please share your configuration, the scripts you run, and how you process the dataset? This result is even lower than baselines that...
Do you use wandb to record the training process? If so, could you share that as well?
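In case it helps, wandb logging can usually be switched on in an mmdet-style config through mmcv's `WandbLoggerHook`; the `project` name below is only a placeholder:

```python
# mmdet-style logging config; WandbLoggerHook is a real mmcv hook,
# but the init_kwargs values are placeholders.
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='WandbLoggerHook',
             init_kwargs=dict(project='consistent-teacher')),
    ])
```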
I would suggest following this config to (1) increase the batch size, (2) increase the number of labeled samples within a batch, and (3) lower your learning rate: https://github.com/Adamdad/ConsistentTeacher/blob/main/configs/consistent-teacher/consistent_teacher_r50_fpn_coco_180k_10p_2x8.py For 2...
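As a rough illustration of where those three knobs live in an mmdet-style config (the exact field names, e.g. `sample_ratio`, follow common semi-supervised detection configs and may differ from the file linked above):

```python
data = dict(
    samples_per_gpu=8,        # (1) larger per-GPU batch size
    workers_per_gpu=4,
    sampler=dict(
        train=dict(
            # (2) labeled:unlabeled ratio inside each batch; raising the
            # first number puts more labeled samples into a batch.
            sample_ratio=[1, 4],
        )),
)
# (3) lower the learning rate along with any batch-size change
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)
```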
Sorry, but this does not seem to be a problem with our code; it looks like a Python multiprocessing issue. Could you please provide details about your running environment, the commands...
I would say it's possible, although I haven't tried it myself. There might be some technical challenges that I'm not completely aware of.
Dear @Ace-blue, Your error seems to be an `mmcv` installation error. I would suggest installing `mmcv` from source to get rid of the incomplete installation problem. Best
Yes, it should be different across datasets. This argument is used in the distributed data sampler. https://github.com/Adamdad/ConsistentTeacher/blob/1fa64775d93976d9b4ceffa4a2ee7d10a5c50c29/ssod/datasets/samplers/semi_sampler.py#L211 Basically, it controls your epoch length when loading data. As...
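A toy sketch of how such an epoch-length argument typically works in a distributed sampler (names and logic are illustrative, not the actual `semi_sampler.py` implementation):

```python
import torch
from torch.utils.data import Sampler

class FixedLengthDistributedSampler(Sampler):
    """Yields a fixed number of indices per epoch on each replica,
    independent of the underlying dataset size."""

    def __init__(self, dataset, epoch_length, num_replicas, rank, seed=0):
        self.dataset = dataset
        self.epoch_length = epoch_length   # iterations per replica per epoch
        self.num_replicas = num_replicas
        self.rank = rank
        self.seed = seed
        self.epoch = 0

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.seed + self.epoch)  # reshuffle every epoch
        total = self.epoch_length * self.num_replicas
        # Sampling with replacement makes the epoch length a free choice.
        indices = torch.randint(len(self.dataset), (total,), generator=g).tolist()
        # Each replica takes an interleaved slice of the global index list.
        return iter(indices[self.rank::self.num_replicas])

    def __len__(self):
        return self.epoch_length

    def set_epoch(self, epoch):
        self.epoch = epoch
```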
Dear @BowieHsu, Your recognition of our paper is greatly appreciated. We understand that the peer-review process can be challenging. However, we are committed to producing the best possible paper through...