Wenguan Wang
@gaowq2017 Please read the paper more carefully. As we mentioned, our work shows the promise of formulating general segmentation and few-shot segmentation in a unified view of prototype learning. Ps: Prototype...
@XiaoxxWang Just read the code more carefully... For InfoNCE, you also have the ground truth -- you know which one is the positive sample and which one is the negative sample,...
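For readers following this thread: a minimal sketch of the InfoNCE idea being discussed -- the ground truth tells you which sample is the positive, and the loss is a softmax cross-entropy with the positive scored against the negatives. Function and variable names here are illustrative, not taken from the repository.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE: -log( exp(sim(a, pos)/tau) / sum_j exp(sim(a, x_j)/tau) ),
    where the sum runs over the positive and all negatives."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # index 0 is the (known, ground-truth) positive sample
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

The loss is small when the anchor is closest to its labeled positive and large otherwise.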
@prashnani Thanks for your interest. As the data and code have been regenerated/modified several times, I cannot figure out the exact reason. Maybe you can directly use the generated fixation maps.
@KID0203 Sorry for the inconvenience, but it was done three years ago. Some parameter settings may be different, and I really cannot find the original ones :(
@rederoth Oh, thanks for the reminder. I thought I had released all the attribute annotations. Let me check whether I still have the attribute annotations for...
@chwoong The prototypes are viewed as non-learnable parameters, as they are computed as the mean of a group of feature representations. The parameters of the feature extractor are learnable parameters,...
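To make the "non-learnable" point concrete: a prototype carries no gradient-trained weights of its own -- it is simply the mean of the feature vectors assigned to it. A sketch with hypothetical names and shapes:

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Non-learnable prototypes: each class prototype is just the mean
    of the feature representations belonging to that class.

    feats:  (n, d) array of feature vectors from the (learnable) extractor
    labels: (n,) integer class labels
    """
    protos = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        protos[c] = feats[labels == c].mean(axis=0)
    return protos
```

Only the feature extractor that produces `feats` has learnable parameters; the prototypes are recomputed from its outputs.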
@chwoong Yes, here we mean learnable parameters :-)
@RenLibo-aircas Thanks for your interest. Basically, the equipartition constraint (see Eq. 8) in the Sinkhorn-Knopp iteration can improve the diversity of the cluster centers. But in practice, case-by-case consideration is needed....
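A minimal sketch of the Sinkhorn-Knopp iteration mentioned above, showing how the equipartition constraint keeps cluster centers diverse: alternating row/column normalization pushes every prototype column toward equal total usage, so no prototype can absorb all the pixels. Names and the iteration count are illustrative.

```python
import numpy as np

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Sinkhorn-Knopp: turn pixel-to-prototype scores into a soft
    assignment matrix under an equipartition constraint -- columns
    (prototypes) are forced toward equal total mass, which promotes
    diverse, non-collapsed cluster centers."""
    Q = np.exp(scores / eps)                        # (n_pixels, n_protos)
    Q /= Q.sum()
    n, k = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True); Q /= n   # each pixel: unit mass
        Q /= Q.sum(axis=0, keepdims=True); Q /= k   # each prototype: equal mass
    return Q * n
```

After the iteration, every prototype column sums to n/k -- the equipartition property -- regardless of how skewed the raw scores were.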
@clgx00 Within-class clustering is unsupervised -- for each class c, we need to automatically find K prototypes. However, the whole task setting is supervised -- for each pixel, we know...
@clgx00 No, it is not used at inference. The clustering only serves to find class prototypes as references for classification. Once trained, the class prototypes are stored and used directly for classification.
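To illustrate the inference path described above (no clustering involved): each pixel feature is compared against the stored prototypes and takes the class of its most similar one. This is a generic nearest-prototype sketch with assumed names, not the repository's exact code.

```python
import numpy as np

def classify(feats, prototypes):
    """Inference with stored prototypes: assign each feature vector the
    class of its most similar (cosine) prototype. No clustering happens
    here -- the prototypes were fixed at training time.

    feats:      (n, d) pixel features
    prototypes: (C, d) one stored prototype per class
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).argmax(axis=1)   # (n,) predicted class indices
```

With K prototypes per class, the same idea applies: take the max similarity over each class's K prototypes, then the argmax over classes.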