
Momentum Update

Open Macc520 opened this issue 1 year ago • 2 comments

First of all, I would like to say that this is a milestone piece of work. However, I have a few questions I need to resolve so that I can understand it better.

  1. In the loss_contrast_mem file, I only see encode_q. Is encode_K actually not used in the paper?

  2. What is the difference between ContrastAuxCELoss and ContrastCELoss?

Looking forward to your reply. Thank you very much!

Macc520 avatar Sep 22 '22 02:09 Macc520

Hi @Macc520 , 1) we did not use encode_K as in MoCo. Adding another momentum encoder does not lead to obvious gains in performance, but it significantly increases memory and computation costs. 2) Aux refers to an auxiliary cross-entropy loss applied to intermediate features. With it, the final loss has the form ce loss + 0.4 * auxiliary ce loss. This is a trick widely used in semantic segmentation training.
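
For concreteness, here is a minimal PyTorch sketch of that loss composition. It is illustrative only, not the repo's actual code: the class name ContrastAuxCELossSketch and the arguments aux_weight and contrast_weight are assumptions.

```python
import torch.nn as nn


class ContrastAuxCELossSketch(nn.Module):
    """Sketch of the Aux variant: CE on the final prediction, plus a weighted CE on
    intermediate ("auxiliary") logits, plus the pixel contrastive term.
    ContrastCELoss would simply omit the auxiliary CE term."""

    def __init__(self, contrast_loss, aux_weight=0.4, contrast_weight=0.1):
        super().__init__()
        self.ce = nn.CrossEntropyLoss(ignore_index=255)
        self.contrast_loss = contrast_loss      # pixel-wise contrastive loss (memory queue handled inside)
        self.aux_weight = aux_weight            # 0.4, as in the reply above
        self.contrast_weight = contrast_weight  # assumed weight on the contrastive term

    def forward(self, main_logits, aux_logits, embeddings, target):
        loss = self.ce(main_logits, target)                          # main CE loss
        loss = loss + self.aux_weight * self.ce(aux_logits, target)  # + 0.4 * auxiliary CE
        loss = loss + self.contrast_weight * self.contrast_loss(embeddings, target)  # + contrastive
        return loss
```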

tfzhou avatar Oct 05 '22 20:10 tfzhou

Thank you very much for your reply. It cleared up a lot of confusion for me.

In addition, I would like to mention that applying your method to medical image segmentation (U-Net) has given me significant improvements. However, I changed the number of categories and abandoned the sampling of image pixels, sending all pixels into the queue (rather than stride=2 as in the original paper), because medical image segmentation requires dense prediction over every pixel. The final loss is: CrossEntropy2D + 0.01 * Contrastive.
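
For reference, a minimal sketch of this dense-enqueue variant (illustrative only, not code from the repo: the function name enqueue_all_pixels and its arguments are assumed, and the memory is assumed to be a per-class circular buffer of shape (num_classes, queue_size, dim)):

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def enqueue_all_pixels(queue, queue_ptr, embeddings, labels, class_id):
    """Push every pixel embedding of `class_id` into that class's circular queue,
    instead of sampling a strided subset (stride=2 in the original paper).
    `embeddings` is assumed to be (B, dim, H, W) and `labels` (B, H, W)."""
    num_classes, queue_size, dim = queue.shape
    feats = embeddings.permute(0, 2, 3, 1).reshape(-1, dim)   # flatten to (B*H*W, dim)
    feats = feats[labels.reshape(-1) == class_id]             # keep all pixels of this class
    if feats.numel() == 0:
        return
    feats = F.normalize(feats, dim=1)
    ptr = int(queue_ptr[class_id])
    idx = (ptr + torch.arange(feats.shape[0], device=feats.device)) % queue_size
    queue[class_id, idx] = feats                              # circular write, wraps around
    queue_ptr[class_id] = (ptr + feats.shape[0]) % queue_size
```

The overall objective would then be total = ce_loss + 0.01 * contrastive_loss, matching the weighting described above.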

I'm writing a paper. I want to ask whether this counts as an innovative contribution. I look forward to your reply! Thanks!

Macc520 avatar Oct 06 '22 00:10 Macc520