
loss_contrast

Open Summer77723 opened this issue 2 years ago • 2 comments

Hello. Thanks for your great work. I have some questions about the code: what is the meaning of with_embed, and why is loss_contrast not used?

1.

        if with_embed is True:
            return loss + self.loss_weight * loss_contrast

        return loss + 0 * loss_contrast  # just a trick to avoid errors in distributed training
2.

        if is_distributed():
            import torch.distributed as dist

            def reduce_tensor(inp):
                """
                Reduce the loss from all processes so that
                process with rank 0 has the averaged results.
                """
                world_size = get_world_size()
                if world_size < 2:
                    return inp
                with torch.no_grad():
                    reduced_inp = inp
                    dist.reduce(reduced_inp, dst=0)
                return reduced_inp

            loss = self.pixel_loss(outputs, targets, with_embed=with_embed)

            backward_loss = loss
            display_loss = reduce_tensor(backward_loss) / get_world_size()
        else:
            backward_loss = display_loss = self.pixel_loss(outputs, targets)
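My guess about the 0 * loss_contrast trick, in case it helps frame the question: multiplying by zero keeps loss_contrast attached to the autograd graph, so the embedding head still receives (zero) gradients and DistributedDataParallel does not complain about parameters that never got a gradient. A minimal standalone sketch of that pattern (seg_param and embed_param are just my own illustrative names, not from this repo):

    import torch

    def total_loss(loss_seg, loss_contrast, loss_weight, with_embed):
        # Same pattern as the snippet above: when with_embed is False the
        # contrastive term has zero weight but stays in the backward graph.
        if with_embed:
            return loss_seg + loss_weight * loss_contrast
        return loss_seg + 0 * loss_contrast

    seg_param = torch.tensor(2.0, requires_grad=True)    # stands in for the segmentation head
    embed_param = torch.tensor(3.0, requires_grad=True)  # stands in for the embedding head
    loss = total_loss(seg_param * 1.5, embed_param * 0.5, loss_weight=0.1, with_embed=False)
    loss.backward()
    print(seg_param.grad, embed_param.grad)  # embed_param.grad is 0.0, not None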

Looking forward to your reply!

Summer77723 avatar Aug 10 '22 00:08 Summer77723

Hi, @Summer77723, our code has a warmup stage, in which the contrastive loss is not applied, i.e., the weight of the contrastive loss is zero.
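Roughly, the schedule amounts to something like the sketch below (illustrative names only, not the exact variables used in the code or configs):

    def contrast_weight(cur_iter, warmup_iters, loss_weight):
        # During the warmup stage the contrastive term contributes nothing;
        # after warmup it is scaled by the configured loss weight.
        return 0.0 if cur_iter < warmup_iters else loss_weight

    # e.g. with warmup_iters=5000 and loss_weight=0.1:
    #   iteration 0    -> 0.0
    #   iteration 4999 -> 0.0
    #   iteration 5000 -> 0.1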

tfzhou avatar Sep 01 '22 07:09 tfzhou

Thank you for your reply.

Summer77723 avatar Oct 04 '22 04:10 Summer77723