Image-Classification-Using-EfficientNets
Images become black when fitting grayscale images.
In the custom dataset training notebook, these lines:
images = np.array(images)
images = images.astype('float32') / 255.0
turn the images black if the input images are grayscale.
It's my understanding that the following line:
image = image.astype('float32')/255
will convert the pixel values (ranging from 0-255) to values between 0 and 1. This would then produce an image that appears all-black due to the extremely dark pixel values, since a pixel value of 0 is taken to be black and 255 is taken to be white.
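For reference, a minimal sketch of what the conversion does to the raw values (numpy only; the 2x2 sample image is made up):

import numpy as np

# Hypothetical 2x2 8-bit grayscale image.
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)

normalized = image.astype('float32') / 255
print(normalized)
# [[0.         0.2509804 ]
#  [0.5019608  1.        ]]
# On a 0-255 display scale these values are all essentially 0,
# which is why a viewer that assumes 8-bit data shows the image as black.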
The understanding in your explanation is mostly correct, with one small note on the conversion: the divisor should be 255.0, i.e. image = image.astype('float32')/255.0, not 256.
In this line, the image is converted to a floating-point data type (float32) to allow for decimal values, and the pixel values are then divided by 255.0, which is the maximum value a pixel can have in an 8-bit grayscale image. Dividing by 256 would instead leave the brightest pixel slightly below 1.0 (255/256 ≈ 0.996).
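A quick check of the two divisors (the printed values are just 255/255 and 255/256):

import numpy as np

pixel = np.float32(255)   # brightest possible value in an 8-bit image

print(pixel / 255.0)      # 1.0        -> full scale maps exactly to 1.0
print(pixel / 256.0)      # 0.99609375 -> full scale falls just short of 1.0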
By dividing by 255.0, the pixel values are normalized to the range 0-1: a pixel value of 0 becomes 0.0 (black), a pixel value of 255 becomes 1.0 (white), and every intermediate value is scaled proportionally.
Therefore, the resulting image would not be all-black but rather a representation of the original image with pixel values scaled down to the 0-1 range. Dark pixels would have values close to 0, and brighter pixels would have values closer to 1.
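For illustration, a minimal sketch (assuming matplotlib; the gradient image is synthetic) of how the same normalized data can either render correctly or appear black, depending on the display range the viewer assumes:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical grayscale gradient from 0 (black) to 255 (white).
image = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
normalized = image.astype('float32') / 255.0

fig, (ax1, ax2) = plt.subplots(1, 2)

# Display range matches the data range: the gradient renders normally.
ax1.imshow(normalized, cmap='gray', vmin=0.0, vmax=1.0)
ax1.set_title('0-1 data on a 0-1 scale')

# Display range assumes 8-bit data: every value is near 0, so it looks black.
ax2.imshow(normalized, cmap='gray', vmin=0, vmax=255)
ax2.set_title('0-1 data on a 0-255 scale')

plt.show()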