Add `FromLinear` and `IntoLinear` lookup table creation to build script
This PR adds the creation of float ↔ uint conversion lookup tables for `FromLinear` and `IntoLinear` to the crate's build script. As a result, this crate no longer depends on `fast-srgb8` (I have confirmed that the lookup table used by `fast-srgb8` is identical to the one generated by the build script).
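For illustration, the generated u8 → f32 sRGB-to-linear table corresponds to something like the following sketch (the function name and exact form are illustrative, not the actual generated code):

```rust
// Illustrative only: computes the 256-entry sRGB-to-linear table that the
// build script would emit as a `const` array.
fn build_srgb_u8_to_f32_table() -> [f32; 256] {
    let mut table = [0.0f32; 256];
    for (i, entry) in table.iter_mut().enumerate() {
        let encoded = i as f64 / 255.0;
        // Standard sRGB transfer function (encoded -> linear).
        *entry = if encoded <= 0.04045 {
            (encoded / 12.92) as f32
        } else {
            ((encoded + 0.055) / 1.055).powf(2.4) as f32
        };
    }
    table
}
```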
Along with this, I have added the following features: `float_lut` (for u8 to float conversions), `float_lut16` (for u8 to float and u16 to float conversions), `fast_uint_lut` (for fast f32 to u8 conversions), and `fast_uint_lut16` (for fast f32 to u8 and f32 to u16 conversions). Of these, I have made `float_lut` and `fast_uint_lut16` the default features. `float_lut` must be a default to avoid a breaking change for `Srgb` and `RecOetf`, whose lookup tables were replaced by the build script. I included `fast_uint_lut16` as a default since its largest generated table contains only 1152 u64s, although I would understand replacing it with `fast_uint_lut` as the default, since that only generates tables of fewer than 200 u32s.
Building the crate now takes considerably longer due to the new code. I added some statements to the build script's main.rs that might prevent the crate from being rebuilt unnecessarily often (although I have my doubts, since every time I run a test the build script seems to rerun even though nothing has changed). If there are improvements to be made, please let me know and I'll implement them.
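The statements in question are of the `cargo:rerun-if-changed` kind; roughly like this sketch (file names here are illustrative):

```rust
// build/main.rs — limit reruns to the script's own inputs. Without any
// rerun-if-changed lines, Cargo reruns the script whenever anything in
// the package changes.
fn main() {
    println!("cargo:rerun-if-changed=build/main.rs");
    println!("cargo:rerun-if-changed=build/lut.rs");
    // Enabled features are exposed to build scripts as CARGO_FEATURE_* env
    // vars, so reruns can also be tied to them:
    println!("cargo:rerun-if-env-changed=CARGO_FEATURE_FAST_UINT_LUT16");
    // ... lookup table generation ...
}
```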
Also, does removing a dependency qualify as a breaking change? If so, I can add `fast-srgb8` back so that it can be removed at the next major update.
CodSpeed Performance Report
Merging #416 will degrade performance by 24.6%
Comparing Kirk-Fox:lookup (544e40f) with master (234309c)
Summary
❌ 2 regressions
✅ 45 untouched benchmarks
:warning: Please fix the performance issues or acknowledge them on CodSpeed.
Benchmarks breakdown
| | Benchmark | master | Kirk-Fox:lookup | Change |
|---|---|---|---|---|
| ❌ | matrix_inverse | 352.5 ns | 467.5 ns | -24.6% |
| ❌ | linsrgb_f64 to rgb_u8 | 4.7 µs | 5.3 µs | -11.49% |
I'm not sure I understand what's causing the tests to fail at this point.
Interesting. The longer build times are worrying, but there may be optimization opportunities. I still have that cold I mentioned, so I'll take a proper look when I'm feeling better. A few quick things for now:
- I suppose it doesn't make sense to depend on `fast-srgb8` anymore, and that's not breaking, but its replacement has to be available unconditionally. It wasn't optional before.
- The interpolation code doesn't need to be completely generated if it's largely the same every time.
- The tests fail because `IntoLinear` is included unconditionally, but unused when some features are disabled. Expand the last command here.
- For the cargo features in general, I would probably slice it the other way and have one (or two) per gamma function. That reduces the code size impact for each feature.
Alright, thanks for the direction. I'll fix the issue with `IntoLinear` and `FromLinear` being included unconditionally when I get the chance. How should I handle the interpolation code instead? I think it's useful to have this general-purpose algorithm somewhere in the codebase, so how might I reconcile that with not fully running the interpolation code each time?
I changed the features around as you suggested, and I think I was also able to fix the issue with the build script running more often than it needs to. It should now, hopefully, only rerun if any of the files in the build folder change or if any features change. I also made the code for `Srgb` unconditional, so that removing `fast-srgb8` doesn't constitute a breaking change.
Currently, the conditional compilation for the 16-bit lookup table methods/structs is controlled by repeated uses of `#[cfg(feature = "prophoto_lut")]`. Ideally, I would have written it as `#[cfg(any(feature = "prophoto_lut"))]`, to make it clear that this should be compiled if any 16-bit lookup table feature is enabled (in case more are added later), but that leads to a redundancy warning. Is there a more idiomatic way of doing that?
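For reference, the two forms look like this (the struct name is hypothetical):

```rust
// Current form: a plain gate, repeated at each 16-bit item.
#[cfg(feature = "prophoto_lut")]
pub struct ProPhotoLut16;

// The intended future form, once more 16-bit features exist, would be:
// #[cfg(any(feature = "prophoto_lut", feature = "another_lut16"))]
// With only one predicate inside, `any(...)` trips a lint about the
// redundant wrapper, so it can't be used as forward-looking documentation yet.
```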
Additionally, even though the build script runs less often now, it is still quite slow and drastically increases the time the PR tests take. So, as mentioned before, do you have a suggestion for how I can keep the float to uint lookup table generation code somewhere in the codebase while not fully rerunning it each time?
I may be starting to realize a way to optimize the build script, but it's going to take some further research. I'll hopefully be pushing a new commit within the week that will at least somewhat improve the build time.
I went ahead and optimized the table building to use integration instead of summation, since these are all exponential or linear functions. I also removed the error tests and hard-coded the number of mantissa bits used in the index (3 for u8 and 7 for u16), since the values do not seem to change.
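To sketch the idea (this is not the PR's actual code): for a pure power-law segment f(x) = x^g, the average over an interval has a closed form via the antiderivative, so the per-entry sampling loop can be replaced outright:

```rust
// Summed approximation: sample midpoints and average. This is the kind of
// per-entry loop the closed form replaces.
fn mean_pow_summed(a: f64, b: f64, g: f64, samples: u32) -> f64 {
    let step = (b - a) / samples as f64;
    (0..samples)
        .map(|i| (a + (i as f64 + 0.5) * step).powf(g))
        .sum::<f64>()
        / samples as f64
}

// Closed form: the mean of x^g over [a, b] is
// (b^(g + 1) - a^(g + 1)) / ((g + 1) * (b - a)).
fn mean_pow_integrated(a: f64, b: f64, g: f64) -> f64 {
    (b.powf(g + 1.0) - a.powf(g + 1.0)) / ((g + 1.0) * (b - a))
}
```

The two agree as the sample count grows, e.g. `mean_pow_summed(0.2, 0.4, 2.4, 10_000)` against `mean_pow_integrated(0.2, 0.4, 2.4)`.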
Would it also be a good idea to generate the u8 to f32 tables separately? I'm not sure how much of a performance cost converting from f64 to f32 carries.
I'm porting over the named colors in #417. I have also modernized it and made use of quote for nicer syntax.
I'll start working on consolidating those into single library functions and moving things to the codegen crate. I'll also try to document the code a bit more, so it's clearer what's going on.
Thank you, and sorry for the extra work. I do think this direction will be better. Now that the cost of generating the code is paid in advance, we could change the cargo feature setup to something simpler (sorry again). I would suggest something like this:

- `u8` to/from float - always included.
- `u16` to/from float - behind an opt-in `gamma_lut_u16` feature.
Simple as that. This avoids inflating the test set too much and relies on the compiler's ability to disregard constants it doesn't use. The feature is more there so it's possible to keep the binary size slim. A u16 to f32 table is 256 kilobytes, while a u8 to f32 table is "only" one kilobyte, so it matters more if a u16 table is included by accident. I hope that makes sense...
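The arithmetic behind those sizes, as a quick sanity check:

```rust
// 65536 f32 entries vs. 256 f32 entries:
const U16_TABLE_BYTES: usize = (u16::MAX as usize + 1) * core::mem::size_of::<f32>(); // 262144 ≈ 256 KiB
const U8_TABLE_BYTES: usize = (u8::MAX as usize + 1) * core::mem::size_of::<f32>(); // 1024 = 1 KiB
```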
As for documentation, anything that explains the reasoning (especially where it diverges from the original) will help future us, or other people who haven't been part of the process. Like with other things, I don't want this to turn into a black box later. I would also like to have references to the original implementation(s). That's key for tracing them back to the original reasoning, and for having something to compare to if refactoring is required or if there are any issues. I know it's perhaps not the most fun thing to do, but it's even less fun to not have them later. So thanks in advance for taking some time to explain it. :pray:
A side note about the new codegen crate: let me know if you run into any rough edges. One thing I have noticed with `quote` is that it's usually best to keep each invocation relatively small. Rust-analyzer and/or clippy seemed to struggle a bit with very large blocks. It's still generally better than strings, IMO.
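As a rough sketch of what "small invocations" can look like (the names here are hypothetical, not from the PR): build each generated item in its own helper and compose the token streams, instead of quoting one huge block:

```rust
use proc_macro2::{Ident, TokenStream};
use quote::quote;

// One small `quote!` per generated item, composed by the caller.
fn table_const(name: &Ident, values: &[f32]) -> TokenStream {
    let len = values.len();
    quote! {
        pub const #name: [f32; #len] = [#(#values),*];
    }
}
```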
Sorry that I've been less active. The school semester started and is kicking my butt. Thank you for the extra comments and I'll try to address them soon
No worries, I know how it can be. 😬 Thanks for the update, and let me know if it turns out to be too stressful to find time. Good luck with the studies and don't forget to sleep!
Hi, how's it going? I just wanted to check in, and also let you know that there are some fixes in master that you will need to (preferably) rebase in if you get back to this. Clippy will complain about unrelated lifetime annotations otherwise.
How does it look regarding getting a moment for this PR? Would it help if we scope it down a bit? The most important change from my perspective is to have the rationale for the algorithm changes in your words. The rest are things I can take care of.
Yeah, scoping it down would help. I'm also trying to type up a short document describing how I arrived at the formulae I did, since it's a bit too in-depth to write as comments in the code. I'll still include a basic explanation of what's going on in the documentation, though. Does that sound alright?
That sounds great, thank you! I don't want you to feel like you are stuck with this, so the basic explanation could be good enough on its own if you want to keep it short. The point is to answer why this change was made (performance?) and what makes it equivalent to the original (referring to or showing an equation, algorithm, etc.). That usually works as reference for refactoring and bug fixing. Details for extra clarity are still appreciated, but don't sweat it.
When you feel like this is done, I would be grateful if you could also squash the commits into a single commit, like before. That should be it unless you want to address any of the other comments.
Hello again! I hope things are going well. I just want to check in and see if there's a chance that you will have time to at least squash and rebase this change anytime soon. Skip the documentation if it's too much. That can always be improved later if you happen to have time.
I would love to have it in the next release, which I'll put together sometime at the beginning of next year. I may look into options to merge this anyway, if you don't have time for it before then, while of course preserving its connection to you for proper credit. Only to move it forward. No hard feelings involved. That's not happening during the coming few weeks, though.
Okay, I did a lot to hopefully better organize the code and make it more readable/understandable. I still hope to make a full write-up at some point in the future, but it's pretty involved. I didn't quite understand what you meant regarding hand-writing certain functions, so I didn't change anything on that front.
I also added the u8 to f32 tables as you suggested. I think something might be wrong with the benchmarking tool, so I hope to see how this affects things once it's back up and running. If you decide we should also include u16 to f32 tables, let me know and I will add them.
Let me know if you have any other comments. Once it looks good I can squash the commits into one again.
Thank you, this is great! I should upgrade the benchmark action, like you noticed... I'll let you know when it should work again.
> I still hope to make a full write-up at some point in the future, but it's pretty involved.
I'm absolutely satisfied with the current comments, but feel free to add to them later if you want to and have the time. This is voluntary work, after all, and things like your studies are more important.
> I didn't quite understand what you meant regarding hand-writing certain functions, so I didn't change anything on that front.
It's fine, I just think they would be better to have in the "regular" source files, since they don't contain any generated values (other than the constant names), and to only have the table constants in the generated code. Just to keep the types and their implementations together when possible. It's the kind of nitpick I can take care of later, so feel free to skip it.
> If you decide we should also include u16 to f32 tables, let me know and I will add them.
We can hold off on them for now, since it's not a regression if they are a bit slower. I would like to check how they affect the package size before deciding. I realized that large tables may eventually become an issue with the max package size.
The benchmark should work now, after rebasing to bring in the latest test config. :slightly_smiling_face:
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 83.11%. Comparing base (234309c) to head (b0aec2f). Report is 2 commits behind head on master.
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master     #416      +/-   ##
==========================================
+ Coverage   82.63%   83.11%   +0.47%
==========================================
  Files         131      133       +2
  Lines       20485    20622     +137
  Branches    20485    20622     +137
==========================================
+ Hits        16927    17139     +212
+ Misses       3378     3343      -35
+ Partials      180      140      -40
```
| Components | Coverage Δ | |
|---|---|---|
| palette | 83.18% <100.00%> (+0.50%) | :arrow_up: |
| palette_derive | 81.98% <ø> (ø) | |
Thank you, and sorry for not noticing that you had already squashed it before. GitHub isn't always super clear in that regard.
I had a look at the benchmark and it seems like the regression for f64 to u8 is pretty consistent. I'm not sure what's causing it, but I'll accept it as a future improvement opportunity. There's also a regression for f32 that's just below the threshold for appearing in the table: https://codspeed.io/Ogeon/palette/branches/Kirk-Fox%3Alookup. It could be that some slight difference prevents an optimization from kicking in. It's sometimes hard to know with tiny benchmarks like these.
But let's wrap this up. Thank you so much for your time and help! :star: