Change pixel size of "LA", "La", and "PA" from 4 to 2
Fixes #6503
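For context, a minimal sketch (plain Pillow API, nothing from this PR's diff) of why two bytes per pixel are enough for these modes:

```python
from PIL import ImageMode

# Each of the three modes carries only two meaningful bands: grey or
# palette index, plus (premultiplied, for "La") alpha. The C core
# previously reserved four bytes per pixel and left two as padding.
for mode in ("LA", "La", "PA"):
    print(mode, ImageMode.getmode(mode).bands)
# LA ('L', 'A')
# La ('L', 'a')
# PA ('P', 'A')
```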
Hmm, everything passed when I ran it on my computer. I think the failure in test_image.py might be an endianness issue. I added a lot more modes to the test because they all passed for me, but apparently they still have issues on other systems.
It looks like the Valgrind errors are because the image is created with space for 4-byte pixels, but then only the first three bytes are assigned a value. Later, the pixel size is used to loop over the bytes, and since the last byte was never assigned, Valgrind complains about it. It's not actually a problem, though, since that byte is never used.
I think the easiest fix would be to zero out those bytes when the image is allocated. Another solution would be to go through all of the places where the pixel size is checked and make sure the last byte is ignored when it's not being used. This would probably be faster, since it would skip a lot of computation, but it would also complicate the code, which would then have to figure out the right size everywhere.
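Here's a toy model of the situation in plain Python (the real buffer is malloc'd C memory, so the padding byte starts out truly uninitialized rather than zero-filled like a `bytearray`):

```python
PIXELSIZE = 4   # bytes reserved per pixel
BYTES_USED = 3  # bytes actually written for a three-band mode

pixel = bytearray(PIXELSIZE)           # stand-in for one pixel's storage
pixel[:BYTES_USED] = b"\xff\xff\xff"   # the fourth byte is never assigned

# Later code that loops over the full pixelsize reads the unassigned
# byte; that read of uninitialized memory is what Valgrind reports.
# Zeroing at allocation makes the read defined, while looping only
# over BYTES_USED avoids the read at the cost of tracking the width.
total = sum(pixel[i] for i in range(PIXELSIZE))
```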
I'm not sure what to do about this error on the s390x machine:
```
__________________ TestImageTransform.test_nearest_resize[LA] __________________

self = <Tests.test_image_transform.TestImageTransform object at 0x40085c28f0>
mode = 'LA'

    @pytest.mark.parametrize("mode", ("RGBA", "LA"))
    def test_nearest_resize(self, mode):
        def op(im, sz):
            return im.resize(sz, Image.Resampling.NEAREST)

>       self._test_nearest(op, mode)

Tests/test_image_transform.py:186:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <Tests.test_image_transform.TestImageTransform object at 0x40085c28f0>
op = <function TestImageTransform.test_nearest_resize.<locals>.op at 0x40fae7dfc0>
mode = 'LA'

    def _test_nearest(self, op, mode):
        # create white image with half transparent,
        # do op,
        # the image should remain white with half transparent
        transparent, opaque = {
            "RGBA": ((255, 255, 255, 0), (255, 255, 255, 255)),
            "LA": ((255, 0), (255, 255)),
        }[mode]
        im = Image.new(mode, (10, 10), transparent)
        im2 = Image.new(mode, (5, 10), opaque)
        im.paste(im2, (0, 0))
        im = op(im, (40, 10))
        colors = im.getcolors()
>       assert colors == [
            (20 * 10, opaque),
            (20 * 10, transparent),
        ]
E       assert [(200, (255, ..., (255, 255))] == [(200, (255, ...00, (255, 0))]
E       At index 0 diff: (200, (255, 0)) != (200, (255, 255))
E       Full diff:
E       - [(200, (255, 255)), (200, (255, 0))]
E       + [(200, (255, 0)), (200, (255, 255))]

Tests/test_image_transform.py:176: AssertionError
```
It's not actually returning a wrong result; the colors are just in the opposite order. That's probably because the hash values come out differently now that each pixel is two bytes instead of four while still being stored in an INT32.
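Here's a rough illustration (plain `struct`, not Pillow internals) of why the packed value, and therefore anything derived from it such as hash order, changes with byte order:

```python
import struct

# An "LA" pixel of (255, 0) followed by two padding bytes, as it sits
# in memory when packed at the start of a 4-byte word.
raw = bytes((255, 0, 0, 0))
print(struct.unpack("<I", raw)[0])  # 255 when read little-endian
print(struct.unpack(">I", raw)[0])  # 4278190080 when read big-endian (s390x)
```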
Actually, it would probably work if I changed it to a set comparison instead of a list comparison, since all of the colors should be unique anyway.
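Something like this in `_test_nearest`, so the assertion no longer depends on the order `getcolors()` happens to return:

```python
colors = im.getcolors()
assert set(colors) == {
    (20 * 10, opaque),
    (20 * 10, transparent),
}
```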
Everything's passing now, except that code coverage went down by 0.08%.
Is there anything else that needs to be done before this can be merged?
Closing as I believe #6547 is a better way to go, and this change will be much easier if that change is made.