                        Gaussian Blur not shown to be effective against noise
We just tested episode 6 and were somewhat puzzled by one of the key points: "Applying a low-pass blurring filter smooths edges and removes noise from an image." While I agree that we have seen the blurring of edges, an example showing that blurring is effective against small-scale noise, and hence can be used for denoising, would be nice. Why don't we give the example image gaussian-original.png some random noise right from the start, or let the learners apply some random noise via numpy.random?
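For instance, something along these lines could work (just a sketch; the noise level and the assumption that gaussian-original.png is an 8-bit grayscale image are mine):
import numpy as np
import imageio.v3 as iio
import matplotlib.pyplot as plt
rng = np.random.default_rng()
image = iio.imread('data/gaussian-original.png')
# add zero-mean Gaussian noise, then clip back to the valid 8-bit range
noise = rng.normal(loc=0.0, scale=10.0, size=image.shape)
noisy = np.clip(image.astype(float) + noise, 0, 255).astype(np.uint8)
fig, ax = plt.subplots()
ax.imshow(noisy, cmap='gray')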
This is a great idea, and IMO will more clearly illustrate the motivation for including blurring in an image processing pipeline.
Maybe the following can help, @CaptainSifff ?
A while ago I used parts of this lesson in an overview of image processing, and I used 3D views of the petri-dish image to show the effect of filtering.
- the image used:
import imageio.v3 as iio
import matplotlib.pyplot as plt
import skimage.color
image = iio.imread('data/colonies-01.tif')
image_gray = skimage.color.rgb2gray(image)
fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')
- 3D view of (original) image:
import numpy as np
import matplotlib.pyplot as plt
from skimage.util import img_as_ubyte
# use a square crop so the meshgrid and the image have matching shapes
size = min(image_gray.shape)
x = np.arange(0, size)
y = np.arange(0, size)
X, Y = np.meshgrid(x, y)
img = image_gray[:size, :size]
# rescale the float image to 8-bit integers for a 0-255 intensity axis
img = img_as_ubyte(img)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X, Y, img, cmap='viridis')
ax.view_init(elev=215, azim=-60)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('L')
- 3D view of blurred image:
import numpy as np
import matplotlib.pyplot as plt
from skimage.util import img_as_ubyte
from skimage.filters import gaussian
image_blur = gaussian(image_gray, sigma=3)
size = min(image_blur.shape)
x = np.arange(0, size)
y = np.arange(0, size)
X, Y = np.meshgrid(x, y)
img = image_blur[:size, :size]
img = img_as_ubyte(img)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X, Y, img, cmap='viridis')
ax.view_init(elev=215, azim=-60)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('L')
Thanks for the engagement @chbrandt. Looks like the petri dish image has the kind of noise @CaptainSifff mentions. I wonder whether the visualisation might be better in 2D, as that's what we use in most of the lesson? The 3D one looks nice, though, and I can clearly see the denoising effect.
Great idea! If we stick to 2D, we should find a way to make the small-scale noise visible to the audience: either by pixel-peeping, or maybe by some type of gradient filter?
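For the gradient-filter idea, something like a Sobel filter might work (a sketch only, reusing image_gray and image_blur from the snippets above; sobel is just one choice of gradient filter):
import matplotlib.pyplot as plt
from skimage.filters import sobel
# gradients exaggerate small-scale intensity changes, so pixel-level noise
# shows up clearly in the original and mostly vanishes after blurring
fig, axes = plt.subplots(1, 2)
axes[0].imshow(sobel(image_gray), cmap='gray')
axes[0].set_title('gradient of original')
axes[1].imshow(sobel(image_blur), cmap='gray')
axes[1].set_title('gradient of blurred')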
Then, I think the best way of showing it would be through a transversal cut, showing the intensity of the pixels, say, along Y=150.
> a transversal cut, showing the intensity of the pixels, say, along Y=150.
I like this idea, and would definitely welcome a pull request to add it. However, I think the 3D images are very effective at illustrating the denoising effect, and I would propose accompanying this 2D 'slice' with the 3D images, only without the code that was used to generate them.
My rationale for omitting the 3D plot code is to avoid increasing the cognitive load of the episode/lesson (meshgrid, img_as_ubyte, plot_surface, and view_init are all new functions/methods that learners may have questions about). On the other hand, it would be a shame to deprive interested learners of an opportunity to learn more about how they can create cool 3D plots with matplotlib... 😆
@chbrandt would you be willing to create a public gist of the code you used to generate those 3D plots, and include a link to that gist in the captions to the 3D images? e.g.
![
Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
](episodes/fig/petri_before_blurring.png){
alt='3D surface plot showing pixel intensities across the whole example Petri dish image before blurring'
}
and
![
Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
](episodes/fig/petri_after_blurring.png){
alt='3D surface plot illustrating the smoothing effect on pixel intensities across the whole example Petri dish image after blurring'
}
[Edit: fixed the alternative text description for the second 3D plot.]
Sure @tobyhodges , I can do that. I think you found the right balance.
Before I create the gist and push a PR, let me dump the code for the alternatives discussed (i.e., the transversal cut and pixel-peeping):
Transversal cut/slice
Where are we slicing?
import matplotlib.pyplot as plt
import imageio.v3 as iio
import skimage.color
image = iio.imread('data/colonies-01.tif')
image_gray = skimage.color.rgb2gray(image)
xmin, xmax = (0, image_gray.shape[1])
ymin = ymax = 150
fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')
ax.plot([xmin, xmax], [ymin, ymax], color='red')
What does it (i.e., the intensity of those pixels) look like?
from skimage.util import img_as_ubyte
image_gray_pixels_slice = image_gray[150, :]
image_gray_pixels_slice = img_as_ubyte(image_gray_pixels_slice)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(image_gray_pixels_slice, color='red')
ax.set_ylim(255, 0)
ax.set_ylabel('L')
ax.set_xlabel('X')
Equivalently, the same pixels/slice from the smoothed image:
from skimage.filters import gaussian
image_blur = gaussian(image_gray, sigma=3)
image_blur_pixels_slice = image_blur[150, :]
image_blur_pixels_slice = img_as_ubyte(image_blur_pixels_slice)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(image_blur_pixels_slice, color='red')
ax.set_ylim(255, 0)
ax.set_ylabel('L')
ax.set_xlabel('X')
ax.set_title('Slice "Y=150" from blurred image')
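(An overlay of the two slices on the same axes would also make the smoothing directly comparable; a small variation on the above:)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(image_gray_pixels_slice, color='gray', label='original')
ax.plot(image_blur_pixels_slice, color='red', label='blurred')
ax.set_ylim(255, 0)
ax.set_ylabel('L')
ax.set_xlabel('X')
ax.legend()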
Pixel-peeping
Where are we zooming?
import matplotlib.patches as patches
fig, ax = plt.subplots()
ax.imshow(image_gray, cmap='gray')
xini = 10
yini = 130
dx = dy = 40
rect = patches.Rectangle((xini, yini), dx, dy, edgecolor='red', facecolor='none')
ax.add_patch(rect)
How does it look before smoothing (original image)?
fig, ax = plt.subplots()
ax.imshow(image_gray[yini:yini+dy, xini:xini+dx], cmap='gray')
What does it look like after smoothing?
fig, ax = plt.subplots()
ax.imshow(image_blur[yini:yini+dy, xini:xini+dx], cmap='gray')
Excellent, thanks @chbrandt. Both seem very effective for illustrating the blurring effect. My vote would be for the transversal cut, because I think the code would be marginally easier for learners to understand.
Does anyone else have a strong opinion one way or the other? @datacarpentry/image-processing-maintainers-workbench
When you prepare the PR, please add a comment to the code block to explain the use of img_as_ubyte.
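For example, something like this (my wording):
# rgb2gray and gaussian return floating-point images with values in [0.0, 1.0];
# img_as_ubyte rescales them to 8-bit unsigned integers in [0, 255], matching
# the 0-255 intensity axis used in the slice plots
image_gray_pixels_slice = img_as_ubyte(image_gray_pixels_slice)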
This is great work. Is the idea to extend the lesson by having learners write the code to generate the cuts, or to only use the cuts as visualisations?
I think it's a bit of both, @bobturneruk - include the 3D plots as illustration only, but perhaps include the code for at least one of the methods above, as extra practice with visualising intensities. Learners should be pretty familiar with that kind of plotting by this point in the lesson, after the previous episode about creating histograms.
It would theoretically add time to the lesson, but in practice I suspect the increase in teaching time will be limited, because a good illustrative example will reduce the time spent on questions and clarification.
As something of an aside (sorry), I think some of the code used to generate figures was taken out of the main branch when the repo was upgraded to workbench. It may be worth having a separate issue on this, dealing with where such code should sit in the current repo structure, e.g. instructor notes. This would be relevant to any code added in resolving this issue that is not intended to be run by learners.
Agreed, and I've created #286 to track that as a separate discussion
@tobyhodges: came back to this. A first version is in #292.