MVSS-Net
Questions about the details of the ESB.
Hi, thanks for your great work. I have been reading your paper and the released code these days, and I have a question about the Edge-Supervised Branch (ESB). In the paper, the predicted manipulation edge map, denoted as G_edge(x_i), is obtained by transforming the output of the last ERB with a sigmoid layer. But in the code I don't see that sigmoid layer; the output of the last ERB is returned directly:

```python
if self.sobel:
    res1 = self.erb_db_1(run_sobel(self.sobel_x1, self.sobel_y1, c1))
    res1 = self.erb_trans_1(res1 + self.upsample(self.erb_db_2(run_sobel(self.sobel_x2, self.sobel_y2, c2))))
    res1 = self.erb_trans_2(res1 + self.upsample_4(self.erb_db_3(run_sobel(self.sobel_x3, self.sobel_y3, c3))))
    res1 = self.erb_trans_3(res1 + self.upsample_4(self.erb_db_4(run_sobel(self.sobel_x4, self.sobel_y4, c4))), relu=False)
else:
    res1 = self.erb_db_1(c1)
    res1 = self.erb_trans_1(res1 + self.upsample(self.erb_db_2(c2)))
    res1 = self.erb_trans_2(res1 + self.upsample_4(self.erb_db_3(c3)))
    res1 = self.erb_trans_3(res1 + self.upsample_4(self.erb_db_4(c4)), relu=False)

if self.constrain:
    x = rgb2gray(x)
    x = self.constrain_conv(x)
    constrain_features, _ = self.noise_extractor.base_forward(x)
    constrain_feature = constrain_features[-1]
    c4 = torch.cat([c4, constrain_feature], dim=1)

outputs = []

x = self.head(c4)
x0 = F.interpolate(x[0], size, mode='bilinear', align_corners=True)
outputs.append(x0)

if self.aux:
    x1 = F.interpolate(x[1], size, mode='bilinear', align_corners=True)
    x2 = F.interpolate(x[2], size, mode='bilinear', align_corners=True)
    outputs.append(x1)
    outputs.append(x2)

return res1, x0
```
I think `res1` is the edge output, and no sigmoid layer is applied to it. Did I miss something? Could you help me? Thank you very much. Best regards.
Indeed it is.
We tried several Cross-Entropy-like losses (BCE, focal, ...) in early experiments, and the sigmoid is normally contained inside the implementation of such loss functions. So there is no final sigmoid in the model; keeping the output as raw logits also makes it convenient to swap the loss function later.
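For context (not the authors' code, just an illustration of why the sigmoid can live inside the loss): PyTorch's `BCEWithLogitsLoss` expects raw logits and applies the sigmoid internally, so a model trained this way never needs a final sigmoid layer.

```python
import torch
import torch.nn as nn

# Minimal sketch with dummy tensors (not the repo's training loop): the model
# outputs raw logits for the edge map, and BCEWithLogitsLoss applies the
# sigmoid internally before computing binary cross-entropy.
edge_logits = torch.randn(2, 1, 512, 512)                 # stand-in for res1
edge_gt = torch.randint(0, 2, (2, 1, 512, 512)).float()   # binary edge ground truth

criterion = nn.BCEWithLogitsLoss()    # sigmoid + BCE fused for numerical stability
loss = criterion(edge_logits, edge_gt)

# Switching to another logit-based loss (e.g. a focal loss) only changes
# `criterion`; the model definition stays untouched.
```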
As the edge branch is used for training supervision rather than for refining the output, its sigmoid does not appear in the inference code. The sigmoid applied to the outputs can still be found here:
https://github.com/dong03/MVSS-Net/blob/306240fd48fc9fe5091d1f5181942bf3ac06cab3/common/tools.py#L27
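Roughly, what happens at that line is the following (a sketch with my own variable names, assuming `model` and `img` are already prepared; it is not copied from the repo):

```python
import torch

# The forward pass returns (edge logits, segmentation logits); the sigmoid is
# applied to the segmentation logits outside the model, then thresholded.
edge_logits, seg_logits = model(img)                  # forward() returns (res1, x0)
seg_prob = torch.sigmoid(seg_logits).detach().cpu()   # pixel-level tampering probability
pred_mask = (seg_prob > 0.5).float()                  # binary manipulation mask
```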
Best regards.
Thanks. But I have another question about the inference code:

```python
_, seg = run_model(model, img)
seg = torch.sigmoid(seg).detach().cpu()
```

I think the first returned value `_` is G_edge(x_i) and the second returned value `seg` is the predicted mask, so the first returned value should also be passed through a sigmoid layer, i.e. `torch.sigmoid(_).detach().cpu()`. It really confuses me.
Best regards.
Indeed, the first `_` is the predicted edge map, but the edge map is no longer needed during inference (as I said, it is only used for supervision during training), so we use the variable name `_` to indicate that it is just a placeholder.
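If you do want the predicted edge map at inference time, you can simply keep the first return value and apply the sigmoid yourself. A sketch (not code from the repo; variable names are illustrative):

```python
import torch

# Recover the edge probability map by applying the sigmoid that is otherwise
# folded into the training loss.
edge_logits, seg_logits = run_model(model, img)        # instead of `_, seg = ...`
edge_map = torch.sigmoid(edge_logits).detach().cpu()   # approximately G_edge(x_i) from the paper
seg_map = torch.sigmoid(seg_logits).detach().cpu()     # predicted manipulation mask
```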