
Update `SobelGradients` to include `direction`

bhashemian opened this pull request 3 years ago • 11 comments

Fixes #5188

Description

This PR updates `SobelGradients` and `SobelGradientsd` to include gradient direction. It also adds unit tests.

Types of changes

  • [x] Non-breaking change (fix or new feature that would not break existing functionality).
  • [x] New tests added to cover the changes.
  • [x] Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • [x] Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • [x] In-line docstrings updated.

bhashemian avatar Sep 21 '22 15:09 bhashemian

Hi @drbeh, I have just added a unit test for CalcualteInstanceSegmentationMap, which uses SobelGradients, and found that cv2 and SobelGradients produce different results. The difference also prevents CalcualteInstanceSegmentationMap from getting correct instance segmentation results, so I am using cv2 for now. I will withdraw my approval of this PR; please check it. Here is my test code.

import cv2
import matplotlib.pyplot as plt
import numpy as np

from monai.transforms.post.array import SobelGradients

# A 20x20 test image with a bright 10x10 square in the middle.
image = np.zeros([20, 20])
image[5:15, 5:15] = 1

# Gradients from MONAI (channel-first input) and from OpenCV.
filter = SobelGradients(kernel_size=5)
mo_ret = filter(image[None]).squeeze()
cv_ret_0 = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=5)
cv_ret_1 = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=5)

# Inverted min-max normalization for visual comparison.
mo_ret = 1 - (mo_ret - np.min(mo_ret)) / (np.max(mo_ret) - np.min(mo_ret))
cv_ret_0 = 1 - (cv_ret_0 - np.min(cv_ret_0)) / (np.max(cv_ret_0) - np.min(cv_ret_0))
cv_ret_1 = 1 - (cv_ret_1 - np.min(cv_ret_1)) / (np.max(cv_ret_1) - np.min(cv_ret_1))

fig, ax = plt.subplots(1, 5, figsize=(50, 10))
ax[0].imshow(mo_ret[0], cmap='gray')
ax[1].imshow(mo_ret[1], cmap='gray')
ax[2].imshow(cv_ret_0, cmap='gray')
ax[3].imshow(cv_ret_1, cmap='gray')
ax[4].imshow(mo_ret[0] - cv_ret_0, cmap='gray')
[Screenshot: MONAI vs. cv2 Sobel gradients on the test image]

Thanks!

KumoLiu avatar Sep 24 '22 09:09 KumoLiu

I have just added a unit test for CalcualteInstanceSegmentationMap, which uses SobelGradients, and found that cv2 and SobelGradients produce different results

@drbeh and I were discussing this in another thread. OpenCV uses the separable implementation cv2.getDerivKernels(1, 0, 5), which is slightly different from what we have here, but we haven't looked into the actual segmentation outcome differences.
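For reference, here is a minimal sketch (assuming only numpy and cv2 are available) to inspect the separable kernels OpenCV actually applies for ksize=5; the effective 2-D kernel is the outer product of the smoothing and derivative parts.

import cv2
import numpy as np

# Separable pair for d/dx with ksize=5: kx is the derivative part, ky the smoothing part.
kx, ky = cv2.getDerivKernels(1, 0, 5)
print(kx.ravel())  # derivative kernel, typically [-1, -2, 0, 2, 1]
print(ky.ravel())  # smoothing kernel, typically [1, 4, 6, 4, 1]

# The effective 5x5 kernel that cv2.Sobel correlates with the image.
print(np.outer(ky, kx))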

wyli avatar Sep 24 '22 16:09 wyli

Just FYI, here are the instance segmentation results from CalcualteInstanceSegmentationMap. The results are calculated with all the same components except for the Sobel gradient.
cv2 result: [Screenshot: instance segmentation using cv2 Sobel]
MONAI result: [Screenshot: instance segmentation using MONAI SobelGradients]
Thanks!

KumoLiu avatar Sep 26 '22 02:09 KumoLiu

Sorry - inadvertently closed the PR

JHancox avatar Sep 26 '22 10:09 JHancox

I was trying to comment that I don't recall seeing that same effect (i.e. the instance segmentation difference), so I'm wondering whether it really is just the Sobel that is making the difference. I will look at my original code to see if I can reproduce it.

JHancox avatar Sep 26 '22 10:09 JHancox

Okay, so my Sobel implementation, which used correlate2d from scipy.signal, predated the MONAI Sobel implementation, which would explain the difference.

JHancox avatar Sep 26 '22 12:09 JHancox

Hi @KumoLiu @JHancox,

As @wyli mentioned, Sobel gradients in OpenCV are implemented differently for larger kernel sizes, but the two should match exactly for a 3x3 kernel. The main reason for our implementation is to match the TIAToolbox and PathML implementations for HoVerNet.

@KumoLiu, can you check the gradient differences with a 3x3 kernel? Also, can you create the instance segmentation result from CalcualteInstanceSegmentationMap with a 3x3 kernel, both with OpenCV and MONAI? Thanks!
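As an illustration (a sketch assuming only cv2 and numpy): at ksize=3, cv2.getDerivKernels reduces to the textbook Sobel kernel, so a 3x3 comparison should be apples-to-apples on the kernel side.

import cv2
import numpy as np

# Separable pair for d/dx with ksize=3; the outer product is the 2-D kernel.
kx, ky = cv2.getDerivKernels(1, 0, 3)
print(np.outer(ky, kx))
# Expected textbook Sobel x kernel:
# [[-1.  0.  1.]
#  [-2.  0.  2.]
#  [-1.  0.  1.]]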

bhashemian avatar Sep 26 '22 13:09 bhashemian

Hi @drbeh, I checked the gradient differences with a 3x3 kernel. For the fake data I generated, there seems to be no difference:
[Screenshot: 3x3 gradient comparison on fake data]
For the test data @JHancox shared with me, cv2 seems to have sharper edges:
[Screenshot: 3x3 gradient comparison on the shared test data]
For the instance segmentation results, the left one is generated by cv2 and the right one by SobelGradients, which produces a completely blank plot. It may be due to the threshold set here: https://github.com/Project-MONAI/tutorials/blob/156a459e95b12f0650b2cc57994f7bb8080b6cff/pathology/hovernet_nuclei_identification/post_processing.py#L142
[Screenshot: instance segmentation results, cv2 (left) vs. SobelGradients (right)]

So I also tried different thresholds; the results are listed below.
[Screenshot: instance segmentation results with different thresholds]
Thanks!

KumoLiu avatar Sep 27 '22 02:09 KumoLiu

I'm sure you know this, but just to be sure: HoVerNet uses a Sobel kernel size of 5x5 :)

JHancox avatar Sep 27 '22 09:09 JHancox

Hi @KumoLiu, thank you very much for comparing the results. It seems that it is related to the threshold, but it really shouldn't be. The only thing I can think of is that maybe they are using a different normalization factor for the kernel, so the output is scaled by a constant, but I believe there is a normalization step afterwards to bring the values between 0 and 1, right?
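As a quick illustration of that reasoning (a minimal sketch, assuming only numpy): min-max normalization cancels any positive constant scale factor, so a different kernel normalization alone should not change the normalized gradient maps.

import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(20, 20))  # stand-in for a gradient map

def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

# Scaling by any positive constant is removed by min-max normalization.
print(np.allclose(minmax(g), minmax(4.0 * g)))  # True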

bhashemian avatar Sep 27 '22 12:09 bhashemian

Hi @KumoLiu, thank you very much for comparing the results. It seems that it is related to the threshold, but it really shouldn't be. The only thing I can think of is that maybe they are using a different normalization factor for the kernel, so the output is scaled by a constant, but I believe there is a normalization step afterwards to bring the values between 0 and 1, right?

Yes, the results I plotted here had been normalized with MinMax.

KumoLiu avatar Sep 27 '22 12:09 KumoLiu

Hi @wyli, based on our previous conversation, I have updated Sobel gradients to use separable kernels of any size, which can be applied to images of any dimension and along any given axis. It would be great if you could review. Thanks!
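For context, here is a minimal sketch of the separable idea (not the actual MONAI implementation; it assumes numpy and scipy): the 1-D smoothing and derivative kernels can be grown to any odd size by repeated convolution with [1, 2, 1], and the gradient along a chosen axis combines the derivative filter on that axis with smoothing on every other axis.

import numpy as np
from scipy.ndimage import correlate1d

def sobel_1d_kernels(kernel_size: int):
    """Return (smoothing, derivative) 1-D kernels by repeated convolution with [1, 2, 1]."""
    smooth = np.array([1.0, 2.0, 1.0])
    deriv = np.array([-1.0, 0.0, 1.0])
    for _ in range((kernel_size - 3) // 2):
        smooth = np.convolve(smooth, [1.0, 2.0, 1.0])
        deriv = np.convolve(deriv, [1.0, 2.0, 1.0])
    return smooth, deriv

def sobel_gradient(image: np.ndarray, axis: int, kernel_size: int = 3) -> np.ndarray:
    """Differentiate along `axis` and smooth along every other axis of an N-D image."""
    smooth, deriv = sobel_1d_kernels(kernel_size)
    out = correlate1d(image.astype(float), deriv, axis=axis, mode="nearest")
    for ax in range(image.ndim):
        if ax != axis:
            out = correlate1d(out, smooth, axis=ax, mode="nearest")
    return out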

bhashemian avatar Oct 26 '22 22:10 bhashemian

Hi @KumoLiu, could you please run your experiment for Sobel gradients again to compare it to the OpenCV implementation? Thanks!

bhashemian avatar Oct 26 '22 22:10 bhashemian

Hi @drbeh, I have just tested it with kernel_size=3 on the fake data and on the data @JHancox shared with me. It seems we get exactly the opposite values from OpenCV.
[Screenshot: 3x3 comparison on fake data]
[Screenshot: 3x3 comparison on the shared test data]

Here is my test code.

import cv2
import matplotlib.pyplot as plt
import numpy as np

from monai.transforms.post.array import SobelGradients

# fake data
image = np.zeros([20, 20])
image[5:15, 5:15] = 1
# real data (overrides the fake data; hv_dir is external test data shared by @JHancox, not defined in this snippet)
image = hv_dir[0]

filter = SobelGradients(kernel_size=3)
mo_ret = filter(image[None]).squeeze()

cv_ret_0 = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
cv_ret_1 = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)

# Inverted min-max normalization for visual comparison.
mo_ret_0 = 1 - (mo_ret[0] - np.min(mo_ret[0])) / (np.max(mo_ret[0]) - np.min(mo_ret[0]))
mo_ret_1 = 1 - (mo_ret[1] - np.min(mo_ret[1])) / (np.max(mo_ret[1]) - np.min(mo_ret[1]))
cv_ret_0 = 1 - (cv_ret_0 - np.min(cv_ret_0)) / (np.max(cv_ret_0) - np.min(cv_ret_0))
cv_ret_1 = 1 - (cv_ret_1 - np.min(cv_ret_1)) / (np.max(cv_ret_1) - np.min(cv_ret_1))

fig, ax = plt.subplots(1, 5, figsize=(50, 10))
ax[0].imshow(mo_ret_0, cmap='gray')
ax[1].imshow(mo_ret_1, cmap='gray')
ax[2].imshow(cv_ret_0, cmap='gray')
ax[3].imshow(cv_ret_1, cmap='gray')
ax[4].imshow(mo_ret_0 - cv_ret_0, cmap='gray')
ax[0].set_title('sobelx in MONAI', fontsize=25)
ax[1].set_title('sobely in MONAI', fontsize=25)
ax[2].set_title('sobelx in CV2', fontsize=25)
ax[3].set_title('sobely in CV2', fontsize=25)
ax[4].set_title('diff mapx', fontsize=25)

If I change these two lines to remove the leading 1 -

mo_ret_0 = 1 - (mo_ret[0] - np.min(mo_ret[0])) / (np.max(mo_ret[0]) - np.min(mo_ret[0]))
mo_ret_1 = 1 - (mo_ret[1] - np.min(mo_ret[1])) / (np.max(mo_ret[1]) - np.min(mo_ret[1]))

changed to

mo_ret_0 = (mo_ret[0] - np.min(mo_ret[0])) / (np.max(mo_ret[0]) - np.min(mo_ret[0]))
mo_ret_1 = (mo_ret[1] - np.min(mo_ret[1])) / (np.max(mo_ret[1]) - np.min(mo_ret[1]))

The results seem more reasonable:

[Screenshot: 3x3 comparison on fake data after removing the inversion]

[Screenshot: 3x3 comparison on the shared test data after removing the inversion]

Thanks!

KumoLiu avatar Oct 27 '22 09:10 KumoLiu

It seems we get exactly the opposite values from OpenCV. [...] The results seem more reasonable.

This looks like the difference between convolution and cross-correlation... perhaps the kernel should be flipped to make it consistent with the literature. cc @kbressem, who is working on https://github.com/Project-MONAI/MONAI/pull/5317

wyli avatar Oct 27 '22 13:10 wyli

This looks like the difference between convolution and cross-correlation... perhaps the kernel should be flipped to make it consistent with the literature. cc @kbressem, who is working on #5317

When I originally ported this, I did find that using cross-correlation was the way to get the same result.

JHancox avatar Oct 27 '22 13:10 JHancox

@KumoLiu @wyli that's right, I flipped the differentiator kernel.

bhashemian avatar Oct 27 '22 13:10 bhashemian

Actually, what we call "convolution" in deep learning is really cross-correlation, which is similar except that the kernel is not flipped.
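A minimal 1-D illustration of that point (assuming only numpy and scipy): true convolution flips the kernel while cross-correlation does not, so with an antisymmetric kernel like the Sobel derivative [-1, 0, 1] the two give results of opposite sign.

import numpy as np
from scipy.ndimage import convolve1d, correlate1d

signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # a simple step edge
deriv = np.array([-1.0, 0.0, 1.0])            # Sobel derivative kernel (antisymmetric)

print(convolve1d(signal, deriv, mode="nearest"))   # [ 0. -1. -1.  0.  0.]
print(correlate1d(signal, deriv, mode="nearest"))  # [ 0.  1.  1.  0.  0.]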

bhashemian avatar Oct 27 '22 13:10 bhashemian

@wyli @Nic-Ma @KumoLiu any other comments here? Thanks

bhashemian avatar Oct 28 '22 12:10 bhashemian

/black

Nic-Ma avatar Oct 31 '22 11:10 Nic-Ma

/build

Nic-Ma avatar Oct 31 '22 11:10 Nic-Ma

/build

wyli avatar Oct 31 '22 12:10 wyli