Yonglong Tian
Hi, @ramprs, do you mean both the backbone network and the linear classifier? I plan to share them at some point later.
Hi, @jizongFox , I will keep updating it.
Hey @WOWNICE, thank you very much for spotting this! I have fixed it, and the fix will be included in the next version.
Hi, @alldbi, Thanks for your interest! The `view learning` experiments will be released in a separate repo later.
@haohang96,

> Is there an official InfoMin configs (detail args) provided for 73.0% results?

Append `--epochs 800 --cosine` in addition to specifying `--method InfoMin`.

> In addition, can you provide the speed...
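The `--cosine` flag enables cosine learning-rate decay over the full training run. A minimal sketch of what such a schedule computes (the function name and arguments here are illustrative, not the repo's API):

```python
import math

def cosine_lr(base_lr, epoch, total_epochs):
    """Cosine annealing: decays from base_lr at epoch 0 toward 0 at total_epochs."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

# With --epochs 800: full rate at the start, half rate at the midpoint.
lr_start = cosine_lr(0.03, 0, 800)    # equals base_lr
lr_mid = cosine_lr(0.03, 400, 800)    # roughly half of base_lr
```

In practice a framework scheduler (e.g. PyTorch's `CosineAnnealingLR`) applies the same curve per epoch or per step.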
> From the paper, RandAugmentation is used for pre-training. are the parameters of rand_augmentation inherited from the imagenet supervised classification task? or are the parameters from the parameter searching based...
Hi, @alceubissoto, for color space transferring, it's [here](https://github.com/HobbitLong/PyContrast/blob/master/pycontrast/datasets/util.py#L241). For channel splitting, it's [here](https://github.com/HobbitLong/PyContrast/blob/master/pycontrast/networks/build_backbone.py#L132). Let me know if you have further questions.
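To illustrate the general idea behind these two steps, here is a minimal sketch that converts RGB to a luma/chroma space and splits the channels into two views. The conversion coefficients are the standard BT.601 YCbCr ones; this is an illustration of the technique, not the repo's exact code or color space:

```python
import numpy as np

def split_views(rgb):
    """Convert an RGB image (H, W, 3) with floats in [0, 1] to YCbCr (BT.601),
    then split the channels into two views: luminance vs. chrominance."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b            # luma
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference chroma
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    view1 = y[..., None]                  # view 1: 1-channel luminance
    view2 = np.stack([cb, cr], axis=-1)   # view 2: 2-channel chrominance
    return view1, view2

img = np.random.rand(4, 4, 3)
v1, v2 = split_views(img)  # shapes (4, 4, 1) and (4, 4, 2)
```

Each view then goes through its own encoder branch, which is why the backbone-building code needs to know the channel split.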
@xwang-icsi, Yes. Using RA gives 72.97, while muting RA (i.e., using `NULL`) gives 72.92.
@bchao1,

> From my understanding, the view generator aims to minimize the MI between views, and the feature extractor aims to maximize MI. However, wouldn't this simply degenerate to the...
Hi, @haohang96,

> The temperature in infomin is 0.15, could you present performance of temperature=0.2? How big is the gap between them?

It's typically small for low-epoch training,...