MIC
[CVPR23] Official Implementation of MIC: Masked Image Consistency for Context-Enhanced Domain Adaptation
I use an RTX 2080 Ti with 11 GB of memory but encountered CUDA out of memory. How much memory is needed for batch size 1?
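One way to answer this empirically is to measure the peak allocation of a single training step. A minimal sketch with a stand-in network (the real MIC model, crop size, and losses will differ, so treat the numbers as a lower bound):

```python
import torch
import torch.nn as nn

# Stand-in network; swap in the actual MIC model for a real measurement.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 19, 1)).cuda()

torch.cuda.reset_peak_memory_stats()
x = torch.randn(1, 3, 512, 512, device="cuda")  # batch size 1
model(x).mean().backward()                      # forward + backward
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```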
We greatly appreciate your amazing work; we are using your model for our task. However, we are experiencing a lot of noise in the road area while using the GTA...
I followed the recommendations in issues #8 and #9 to set CUDA 11.0 and random seed 2, but still got a lower result of 74.7 mIoU. So I compared...
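For reference, a typical seeding helper covering the RNGs a training run touches is sketched below (`set_seed` is a hypothetical helper, not from the MIC codebase); note that even with all of this, bit-exact reproduction is not guaranteed across CUDA/cuDNN versions:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 2) -> None:
    # Seed Python, NumPy, and all PyTorch RNGs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN kernel selection.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```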
I was trying to set up the environment for segmentation on my PC, which has an Nvidia RTX 4070 Ti graphics card. My graphics card is supported by CUDA versions 11.8+ only....
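A quick sanity check for such setups, assuming a CUDA-enabled PyTorch build is installed: Ada (RTX 40xx) GPUs report compute capability 8.9, which pre-11.8 CUDA builds cannot target, so the build's CUDA version and the device capability should both be verified:

```python
import torch

# Confirm the installed PyTorch build can actually drive the GPU.
print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))  # (8, 9) on RTX 40xx
```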
I observe that MIC primarily relies on randomly generated masks during training. The training process of MIC involves not only the random generation of masks but also the random mixing...
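For context, patch-wise random masking along these lines can be sketched as follows (patch size and mask ratio are illustrative values, not MIC's exact hyperparameters):

```python
import torch

def random_patch_mask(img, patch_size=64, mask_ratio=0.7):
    # img: (N, C, H, W) with H and W divisible by patch_size.
    # Zero out a random subset of square patches, keeping each
    # patch with probability 1 - mask_ratio.
    n, _, h, w = img.shape
    mh, mw = h // patch_size, w // patch_size
    keep = (torch.rand(n, 1, mh, mw, device=img.device) > mask_ratio).float()
    # Upsample the patch-level mask to pixel resolution.
    mask = keep.repeat_interleave(patch_size, dim=2) \
               .repeat_interleave(patch_size, dim=3)
    return img * mask, mask
```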
I have custom data. With the MIC model, I got 22% accuracy, while I ran my custom data for 300 iterations with the Faster R-CNN (Detectron2) model, so that...
Thank you for your excellent work! Due to machine limitations, I cannot reproduce your results on a single GPU. How can I perform distributed training?
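MIC's segmentation code builds on mmsegmentation-style tooling, so the exact entry point may differ from the generic recipe, but the usual PyTorch pattern is to launch one process per GPU and initialize a process group; a minimal sketch, assuming a `torchrun` launch:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def init_distributed() -> int:
    # Launched e.g. via: torchrun --nproc_per_node=2 train.py
    # (mmseg-style codebases usually expose this through a
    # `--launcher pytorch` flag instead; adapt to the repo's scripts.)
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank

# Typical usage once a model exists:
# model = DDP(model.cuda(), device_ids=[init_distributed()])
```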
Hi, this work is amazing. In MIC, the image after masking is fed directly into the student-teacher network. However, the masked areas will influence the convolution process and lead to a lot of noise....
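One known remedy for exactly this border effect is partial convolution (Liu et al., 2018), which renormalizes each output by the valid fraction of its receptive field so that zeroed patches do not dilute neighboring activations. A minimal sketch is below; note this is not part of MIC, which feeds the zero-masked image through standard convolutions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Conv layer that rescales outputs by the unmasked fraction
    of each receptive field; mask is (N, 1, H, W) with 1 = valid."""

    def forward(self, x, mask):
        with torch.no_grad():
            ones = torch.ones(1, 1, *self.kernel_size, device=x.device)
            # Count valid pixels in every sliding window.
            valid = F.conv2d(mask, ones, stride=self.stride,
                             padding=self.padding)
            scale = (self.kernel_size[0] * self.kernel_size[1]
                     / valid.clamp(min=1))
        out = super().forward(x * mask)
        if self.bias is not None:
            b = self.bias.view(1, -1, 1, 1)
            out = (out - b) * scale + b  # rescale only the conv term
        else:
            out = out * scale
        # Propagate the mask: a window is valid if it saw any input.
        return out, (valid > 0).float()
```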
Dear authors, thank you very much for your contribution; I am very interested in this work. However, I'm having some problems reproducing the code. I am using a...
https://github.com/lhoyer/MIC/blob/2f932a98b5dd9f598aaeb32411863ceea0809314/seg/mmseg/apis/train.py#L71 In the case of parallel training, the DACS class will be wrapped by `DistributedDataParallelWrapper`, where the model is wrapped by `MMDistributedDataParallel`; by reading the `MMDistributedDataParallel` source code, it can be...
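For reference, the usual mmcv wrapping looks like the sketch below (it must run inside an already-launched distributed job with an initialized process group); the key consequence is that attributes of the wrapped UDA model are reached through `.module`:

```python
import torch
import torch.nn as nn
from mmcv.parallel import MMDistributedDataParallel

# Stand-in module; in MIC this would be the UDA (DACS) model.
model = nn.Conv2d(3, 19, 1)

# Extra kwargs are forwarded to torch's DistributedDataParallel.
model = MMDistributedDataParallel(
    model.cuda(),
    device_ids=[torch.cuda.current_device()],
    broadcast_buffers=False,
    find_unused_parameters=True)

# Inner attributes now live behind `.module`, e.g.:
# model.module.weight
```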