Mismatch in precalculation of tone scale constant r_hit
Apparent mismatch in pre-calculation of r_hit constant for tone scale function.
CTL currently says:
const float r_hit_min = 128.; // scene-referred value "hitting the roof"
const float r_hit_max = 896.; // scene-referred value "hitting the roof"
// Calculate output constants
const float r_hit = r_hit_min + r_hit_max * (log(n/n_r)/log(10000./100.));
but the Blink says:
DRT_CAM_Kernel_daniele_r_hit_min 128
DRT_CAM_Kernel_daniele_r_hit_max 896
// pre-calculate Daniele Evo constants
daniele_r_hit = daniele_r_hit_min + (daniele_r_hit_max - daniele_r_hit_min) * (log(daniele_n / daniele_n_r) / log(10000.0f / 100.0f));
CTL should probably be changed to:
const float r_hit = r_hit_min + (r_hit_max - r_hit_min) * (log(n/n_r)/log(10000./100.));
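For concreteness, here is a minimal standalone C sketch (not code from either repo) comparing the two formulations. It assumes the 100 nit reference (n_r = 100) implied by the SDR discussion below, and the 1000-nit peak is just an illustrative value:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const float r_hit_min = 128.0f;
    const float r_hit_max = 896.0f;
    const float n_r = 100.0f;  // reference luminance; assumed from the 100 nit SDR case
    const float n = 1000.0f;   // illustrative peak luminance in nits

    // normalized position between the 100 nit and 10,000 nit endpoints
    const float t = logf(n / n_r) / logf(10000.0f / 100.0f);

    // CTL as currently written: r_hit_min is not subtracted from r_hit_max
    const float r_hit_ctl = r_hit_min + r_hit_max * t;

    // Blink formulation: interpolate between r_hit_min and r_hit_max
    const float r_hit_blink = r_hit_min + (r_hit_max - r_hit_min) * t;

    printf("t = %.3f  CTL r_hit = %.1f  Blink r_hit = %.1f\n",
           t, r_hit_ctl, r_hit_blink);
    return 0;
}

For n = 1000 this prints t = 0.500, CTL r_hit = 576.0, Blink r_hit = 512.0 (a difference of 64), while at n = 100 the log term is zero and both formulations give 128.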
Need to verify how much changing r_hit from 896 to 768 affects the values produced by the system tone scale.
I would have expected to see more obvious errors with this value being incorrect. But I suppose the overall shape of the curve would still be correct, and the wrong value perhaps caused only a small difference in the output from the intended values, so it wasn't as obvious as one might expect.
The original Daniele algo didn't include that subtraction. Maybe it's best to just lower r_hit_max to 768, and then it's identical. The code is correct; it's just the constant that has the wrong value.
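That checks out arithmetically: with t = log(n/n_r) / log(10000/100), the subtraction form is 128 + (896 - 128) * t = 128 + 768 * t, which is exactly the existing CTL form with r_hit_max lowered to 768.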
The whole curve seems to change ever so slightly.
The Daniele version was just a Desmos plot; the min and max were "hard coded" into the equations rather than being parameters.
If you use the subtraction formulation then r_hit_min and r_hit_max are the actual min and max values of r_hit, which makes more logical sense: the log term runs from 0 at n = 100 to 1 at n = 10,000, so r_hit spans exactly the range [r_hit_min, r_hit_max].
> Need to verify how much changing r_hit from 896 to 768 affects the values produced by the system tone scale.
It actually affects it far less than you might think. I added a slider going from 768 to 896 to this Desmos.
Thanks for verifying that - it is quite a small effect, but it could explain some of the minor discrepancies I've seen in the output. I haven't had a chance to run and evaluate the numbers yet to see how much the error improves when I use the correct value here. Thinking about it, most of my validation and testing was done using the SDR transform, so that would have had the least effect of all.
SDR would not just have the "least effect". It would be entirely unaffected, because SDR output depends only on r_hit_min: at n = 100 the log term is zero, so r_hit = r_hit_min in either formulation.
Fixed in commit c69b0baeee40e052981a76ce63486e3fd9ee23cf.