Image-Classification-Using-EfficientNets

Images become black when fitting grayscale images.

Open mzhadigerov opened this issue 3 years ago • 2 comments

In the custom dataset training notebook, these lines:

images = np.array(images)

images = images.astype('float32') / 255.0

turn the images black if the input images are grayscale.
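A minimal sketch of the normalization step in question, assuming NumPy and using a random array as a stand-in for a batch of 8-bit grayscale images (the shapes are illustrative, not from the notebook):

```python
import numpy as np

# Illustrative stand-in for a batch of 8-bit grayscale images
images = np.random.randint(0, 256, size=(4, 224, 224), dtype=np.uint8)

# The normalization from the notebook: scale 0-255 down to 0.0-1.0
images = np.array(images)
images = images.astype('float32') / 255.0

# The pixel data is intact, just rescaled; all values now lie in [0, 1]
print(images.dtype, images.min(), images.max())
```

The array itself is not "black" after this step; whether it looks black depends on how it is later displayed or cast, as discussed below.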

mzhadigerov avatar Nov 09 '21 23:11 mzhadigerov

It's my understanding that the following line:

image = image.astype('float32')/255

will convert the pixel values (ranging from 0-255) to values between 0 and 1. If the result is then displayed or saved by a viewer that still interprets pixel values on the 0-255 scale, the image appears all black, since every value is near 0 (a pixel value of 0 is taken to be black, and 255 is taken to be white).

r-sinden avatar May 30 '23 15:05 r-sinden

The understanding in your explanation is mostly correct, but one detail of the conversion is worth stressing: the divisor must be 255.0, not 256 (a common off-by-one), i.e. image = image.astype('float32') / 255.0.

In this line, the image is first converted to a floating-point data type (float32) to allow for decimal values. The pixel values are then divided by 255.0, the maximum value a pixel can have in an 8-bit grayscale image. Dividing by 256 instead would map the maximum value 255 to 255/256 ≈ 0.996, so a fully white pixel would never reach exactly 1.0.

By dividing by 255.0, the pixel values are normalized to the range 0-1. A pixel value of 0 becomes 0.0 (black), and a pixel value of 255 becomes 1.0 (white). Every intermediate pixel value is scaled proportionally.

Therefore, the resulting image would not be all-black but rather a representation of the original image with pixel values scaled down to the 0-1 range. Dark pixels would have values close to 0, and brighter pixels would have values closer to 1.
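A sketch of the full round trip, assuming NumPy: normalize for training, then multiply by 255 and convert back to uint8 before displaying or saving, so the image looks correct again:

```python
import numpy as np

# One row of grayscale pixels spanning black to white
img = np.array([[0, 64, 128, 255]], dtype=np.uint8)

norm = img.astype('float32') / 255.0               # 0.0 (black) .. 1.0 (white)
restored = (norm * 255.0).round().astype('uint8')  # back to the 0-255 range

print(np.array_equal(img, restored))  # True
```

Rounding before the cast guards against floating-point values like 63.9999 truncating down to 63.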

AarohiSingla avatar May 31 '23 02:05 AarohiSingla