odiff
Normalize alpha before blending for correct RGB values and distance
In the base branch, alpha values passed to the blend function are not normalized to the range 0.0 to 1.0. As a result, blending can produce invalid RGB values outside the 0 to 255 range, in particular negative ones, and as a consequence the color distance can exceed maxYIQPossibleDelta = 35215. So I've added assertions for the RGB and distance ranges, which fail in the existing test cases.
I've also added normalization of the alpha values. However, we still need to update the existing test cases that count differing pixels with the new, correct expectations.
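For context, here is a minimal sketch of what the normalization amounts to (assuming the blend-against-white formula that the tests below exercise; this is not the actual odiff source):

```ocaml
(* Sketch: blend one channel against a white background.
   [alpha] is expected as a 0.0..1.0 factor here. *)
let blend_channel color alpha = 255. +. ((color -. 255.) *. alpha)

(* Image data carries alpha as 0..255, so it has to be scaled to
   0.0..1.0 before it is used as a blend factor. *)
let blend_semi_transparent_color (r, g, b, alpha) =
  let a = alpha /. 255. in
  (blend_channel r a, blend_channel g a, blend_channel b a, a)
```

Without the division, the blend factor can be as large as 255 and the resulting channels leave the 0..255 range, which is exactly what the new assertions catch.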
I am not really sure that this is algorithmically correct. Could you please provide the failing case, ideally as a test that fails without your changes?
Thanks for your response. I've added tests for blendSemiTransparentColor in a new commit. They pass on this branch, but fail on main with the following results.
> npx esy test blend -f blend
Running 1 test suite matching /blend/i
FAIL CORE: blendSemiTransparentColor
• CORE: blendSemiTransparentColor › blend 255. alpha
expect.float(received).toBeCloseTo(expected, precision)
Precision: 2-digit
Expected: 1.
Received: 255.
• CORE: blendSemiTransparentColor › blend 5. alpha
expect.float(received).toBeCloseTo(expected, precision)
Precision: 2-digit
Expected: 250.
Received: -1020.
99 ┆
100 ┆ test "blend 5. alpha" (fun { expect; _ } ->
101 ┆ let (r, g, b, a) = Odiff.ColorDelta.blendSemiTransparentColor(0., 128., 255., 5.) in
102 ┆ (expect.float r).toBeCloseTo 250.;
103 ┆ (expect.float g).toBeCloseTo 252.51;
104 ┆ (expect.float b).toBeCloseTo 255.;
105 ┆ (expect.float a).toBeCloseTo 0.02);
Raised by primitive operation at OdiffTests__Test_Core.(fun) in ./test/Test_Core.ml:102:6
Called from Stdlib__List.map in ./list.ml:92:20
Called from Stdlib__List.map in ./list.ml:92:32
• CORE: blendSemiTransparentColor › blend 51. alpha
expect.float(received).toBeCloseTo(expected, precision)
Precision: 2-digit
Expected: 204.
Received: -12750.
106 ┆
107 ┆ test "blend 51. alpha" (fun { expect; _ } ->
108 ┆ let (r, g, b, a) = Odiff.ColorDelta.blendSemiTransparentColor(0., 128., 255., 51.) in
109 ┆ (expect.float r).toBeCloseTo 204.;
110 ┆ (expect.float g).toBeCloseTo 229.6;
111 ┆ (expect.float b).toBeCloseTo 255.;
112 ┆ (expect.float a).toBeCloseTo 0.2);
Raised by primitive operation at OdiffTests__Test_Core.(fun) in ./test/Test_Core.ml:109:6
Called from Stdlib__List.map in ./list.ml:92:20
Called from Stdlib__List.map in ./list.ml:92:32
• CORE: blendSemiTransparentColor › blend 128. alpha
expect.float(received).toBeCloseTo(expected, precision)
Precision: 2-digit
Expected: 127.
Received: -32385.
113 ┆
114 ┆ test "blend 128. alpha" (fun { expect; _ } ->
115 ┆ let (r, g, b, a) = Odiff.ColorDelta.blendSemiTransparentColor(0., 128., 255., 128.) in
116 ┆ (expect.float r).toBeCloseTo 127.;
117 ┆ (expect.float g).toBeCloseTo 191.25;
118 ┆ (expect.float b).toBeCloseTo 255.;
119 ┆ (expect.float a).toBeCloseTo 0.5))
Raised by primitive operation at OdiffTests__Test_Core.(fun) in ./test/Test_Core.ml:116:6
Called from Stdlib__List.map in ./list.ml:92:20
Called from Stdlib__List.map in ./list.ml:92:32
Test Suites: 1 failed, 0 passed, 1 total
Tests: 4 failed, 1 passed, 5 total
Time: 0.003s
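For what it's worth, the received values above are consistent with the raw 0..255 alpha being fed directly into a blend-against-white formula (my reading of the failures; the constants below are just the `blend 5. alpha` case reworked by hand):

```ocaml
let () =
  (* raw alpha 5. used directly as the blend factor *)
  Printf.printf "unnormalized: %f\n" (255. +. ((0. -. 255.) *. 5.));
  (* alpha normalized to 5. /. 255. ~= 0.0196 *)
  Printf.printf "normalized:   %f\n" (255. +. ((0. -. 255.) *. (5. /. 255.)))
```

The first expression evaluates to -1020., the received value, and the second to 250., the expected one.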
I checked your implementation, and I think I need more information about why and how it changes the behavior of odiff itself. All the tests are currently failing and I need to understand why. Are you saying that basically all the RGB values are wrong in the current implementation? If so, I am not sure we should change it.
Could you please show where the current implementation of odiff produces a wrong result?
Since it fails to compute the correct square for each RGB value, and therefore the correct distance, there can be two problems: some color diffs are not ignored even when threshold = 1.0, and, in some cases, a pixel is treated as unchanged while a less-distant one is treated as changed. I have a test case for each, and I want to write them down and push them later.
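To illustrate the first problem, here is a sketch under the assumption that a pixel counts as different when its YIQ distance exceeds maxYIQPossibleDelta scaled by the threshold (the exact scaling odiff uses may differ):

```ocaml
let max_yiq_possible_delta = 35215.

(* Hypothetical cutoff: flag a pixel when its distance exceeds the
   scaled maximum delta. *)
let is_different ~threshold delta = delta > max_yiq_possible_delta *. threshold

let () =
  (* With valid RGB the distance never exceeds 35215., so
     threshold = 1.0 should ignore every pixel. *)
  assert (not (is_different ~threshold:1.0 35000.));
  (* An un-normalized alpha can push channels outside 0..255 and
     inflate the distance past 35215. (120000. is an illustrative
     value), so the pixel is flagged even at threshold = 1.0. *)
  assert (is_different ~threshold:1.0 120000.)
```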
I spent some time diving into this and verified that the changes are 100% correct. Thank you for finding and fixing this.
Thanks for checking that and updating the existing tests! Sorry, I haven't had time to write the additional tests I mentioned, so I haven't started on those. Do you still think it's better to have more unit tests at this point? I believe it may be beneficial even now, for extra test-time assertions. Also, I don't mind closing this as resolved at any time.
After a bit of investigation I found a deeper issue with alpha blending. It seems like the formula I used is simply not applicable here.
About tests -- yeah, we probably need more high-level tests with different images. I added a few more corner cases.