
Questions about the Bayar Convolution (i.e., Constrained Convolution) layer in the Noise-Sensitive Branch

Open Codefmeister opened this issue 3 years ago • 6 comments

Hi! Sorry to disturb you again. I downloaded the pretrained model from your link and found that the constrained conv layer has only one parameter, with size [1, 3, 24]. But I thought it would be [3, 1, 5, 5] according to your figure. Could you please point out where the mismatch comes from? Thanks sincerely.

Does it mean the center pixel of the conv kernel is always -1, and that you implemented it as an array which is then filled into the kernel matrix?

Codefmeister · Jul 27 '21 06:07

1. We convert the input image from BGR to gray, so the input channel is 1.
2. Following "Constrained Convolutional Neural Networks: A New Approach Towards General Purpose Image Manipulation Detection", the center element of the constrained conv (aka BayarConv) is fixed to -1, so only kernel_size**2 - 1 = 24 parameters per 5x5 filter are trainable.

That is how we get [1, 3, 24]. Implementation details of these modules will be included in the code to be released later.
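For anyone who wants to experiment before the code release, below is a minimal PyTorch sketch of such a constrained (Bayar) convolution. The class name, the default 1-input-channel / 3-filter / 5x5 configuration, and the normalization of the trainable weights to sum to 1 follow the Bayar & Stamm paper and this thread rather than the exact MVSS-Net implementation, so treat it as an illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayarConv2d(nn.Module):
    """Constrained (Bayar) convolution sketch: the kernel center is fixed to -1
    and the remaining weights are normalized to sum to 1, so only k*k - 1
    weights per filter are learnable (5*5 - 1 = 24, hence a [1, 3, 24] parameter)."""

    def __init__(self, in_channels=1, out_channels=3, kernel_size=5):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        # Trainable part only: shape [in_channels, out_channels, k*k - 1].
        self.weight = nn.Parameter(
            torch.rand(in_channels, out_channels, kernel_size ** 2 - 1) * 1e-3
        )

    def _full_kernel(self):
        # Normalize the learnable weights so each filter sums to 1,
        # then insert the fixed -1 at the kernel center.
        w = self.weight / self.weight.sum(dim=-1, keepdim=True)
        center = self.kernel_size ** 2 // 2
        minus_one = -torch.ones(
            self.in_channels, self.out_channels, 1, device=w.device, dtype=w.dtype
        )
        w = torch.cat([w[..., :center], minus_one, w[..., center:]], dim=-1)
        # Rearrange to the standard conv weight layout [out_channels, in_channels, k, k].
        return w.permute(1, 0, 2).reshape(
            self.out_channels, self.in_channels, self.kernel_size, self.kernel_size
        )

    def forward(self, x):
        # x: gray image tensor of shape [N, 1, H, W].
        return F.conv2d(x, self._full_kernel(), padding=self.kernel_size // 2)
```

A gray image tensor of shape [N, 1, H, W] goes in and three noise-feature maps come out; the kernels always satisfy the fixed -1 center regardless of what is learned.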

dong03 · Jul 27 '21 11:07

When I load the weights from my own training instead of your weight file, the constrained convolution produces a different picture. May I ask what the reason for this could be?

chchshshhh · Sep 09 '21 01:09

> When I load the weights from my own training instead of your weight file, the constrained convolution produces a different picture. May I ask what the reason for this could be?

Sorry, I didn't get it. Could you explain concretely how you got the two pictures? Are they from the same original image?

Chenxr1999 · Sep 10 '21 07:09

> When I load the weights from my own training instead of your weight file, the constrained convolution produces a different picture. May I ask what the reason for this could be?
>
> Sorry, I didn't get it. Could you explain concretely how you got the two pictures? Are they from the same original image?

I load both the weights from my own training and your pre-trained weights into the model, and inspect the constrained convolution output while debugging. The resulting images are different (screenshots: yours vs. mine).

chchshshhh · Sep 10 '21 16:09

I cannot figure it out either. The constrained CNN is used to extract noise-pattern inconsistencies; its visualization mostly serves to show that "there is indeed something strange compared with the context, and such a clue can be further mined by the CNN".

One possible reason is that the output of the constrained CNN is not restricted to [0, 1] (or [0, 255]), and matplotlib tends to auto-adjust the colormap according to the distribution and data type of the input values. So I suggest applying a sigmoid and comparing the relative differences rather than the absolute values. (BTW, the colormap of the constrained CNN visualization in "Constrained R-CNN: A General Image Manipulation Detection Model" is also different from both of ours.)
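For example, something along these lines (a rough sketch only; the helper name and the averaging over channels are illustrative choices, not code from this repo):

```python
import torch
import matplotlib.pyplot as plt


def show_noise_map(noise, path="noise_map.png"):
    """Visualize a constrained-conv output on a fixed 0-1 scale so matplotlib
    does not rescale its colormap to the raw (unbounded) value range."""
    # noise: tensor of shape [C, H, W] straight from the constrained conv.
    vis = torch.sigmoid(noise).mean(dim=0).detach().cpu().numpy()
    plt.imshow(vis, cmap="gray", vmin=0.0, vmax=1.0)
    plt.axis("off")
    plt.savefig(path, bbox_inches="tight")
    plt.close()
```

With vmin/vmax pinned to 0 and 1, matplotlib no longer stretches the colormap to whatever range the raw outputs happen to cover, so two different checkpoints become visually comparable.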

It's just my conjecture, hope it helps.

dong03 · Sep 11 '21 08:09

> I cannot figure it out either. The constrained CNN is used to extract noise-pattern inconsistencies; its visualization mostly serves to show that "there is indeed something strange compared with the context, and such a clue can be further mined by the CNN".
>
> One possible reason is that the output of the constrained CNN is not restricted to [0, 1] (or [0, 255]), and matplotlib tends to auto-adjust the colormap according to the distribution and data type of the input values. So I suggest applying a sigmoid and comparing the relative differences rather than the absolute values. (BTW, the colormap of the constrained CNN visualization in "Constrained R-CNN: A General Image Manipulation Detection Model" is also different from both of ours.)
>
> It's just my conjecture, hope it helps.

Thank you for your answer, I will try it. Actually, I was wondering whether the data augmentation used during training could make the constrained convolution produce a relatively large difference in the output image. If possible, could you release the training data augmentation code so that I can have a try? Finally, I really appreciate your patience in answering these questions.

chchshshhh · Sep 11 '21 15:09