Enhance render test output
Changes
- Add a JSON output option to allow users to format data as they see fit.
- Add a Markdown output option to allow previews in Markdown-supported locations such as GitHub repos.
- Cleaned up HTML output to properly wrap and format; it is also responsive now (will fill the page if a width w <= 0 is specified).
- Fixed a bad memory read (exception) when the same language is specified for all 3 diffs.
- Added a "reduced" image option. The default size can be reduced to roughly 1/10 of the original 512-width size (from 122 MB to 10 MB), and to about 3 MB with a 256 width after conversion to PDF.
- Images are base64-encoded as JPG and embedded into the JSON structure, which can be stored directly in Markdown or HTML. This also means fewer disk reads/writes for diff image generation.
- Uses OpenCV + numpy, if available, for fast resizing; PIL and OpenImageIO are much slower for this encoding case.
- JSON/Markdown storage grows to about 4 MB (from 160 KB) when embedding encoded images, but is now self-contained.
- Also replaced the diff computation with OpenCV + numpy instead of PIL, to avoid extra data copies.
- This enables a new "difference method" option, which defaults to RMS but can be set to "COLORMOMENT" to use OpenCV's Color Moment hash compare.
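The base64 embedding step described above can be sketched roughly as follows. This is a minimal illustration, not the script's actual API: `embed_jpeg_markdown` is a hypothetical helper name, and in the real script the JPEG bytes would come from OpenCV (e.g. `cv2.imencode`) after resizing.

```python
import base64

def embed_jpeg_markdown(jpeg_bytes: bytes, label: str) -> str:
    """Wrap already-encoded JPEG bytes in a self-contained Markdown image tag
    using a data URI, so no image file needs to live next to the document."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return f"![{label}](data:image/jpeg;base64,{encoded})"

# In the render test script the bytes would come from OpenCV, roughly:
#   ok, buf = cv2.imencode(".jpg", resized_image)
#   markdown = embed_jpeg_markdown(buf.tobytes(), "diff_01")
```

The same string can be dropped into a JSON field or an HTML `<img src=...>` attribute, which is what makes the output self-contained.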
Output
Color Moment Hash option output
JSON, Markdown, HTML output
HTML results: responsive (fills) vs. packed.
- Both now scale properly on tablet and mobile.
- Packed is the default, since the default width (w) is 256.
Embedding Example
Markdown embedding. This can be stored in any GitHub repo without the original files.
PDF file with reduced images, produced via a straight HTML-to-PDF save: Test Results.pdf
This looks really compelling, @kwokcb, and I'm looking forward to trying this out in the render test suite!
I like the added flexibility in the new HTML generation, though when I run the render tests through our usual PDF-generation process, the rendered images seem to be smaller on the page, with an extra white border around them:
MaterialXRenderTests_11_22_2025_GitHub.pdf
Do you think it would be possible to adjust the default HTML generation to restore the original size of the images on the page, without the additional white border?
@jstone-lucasfilm, @ld-kerley: This is about as far as I'm going to take this change for now. The data structure can hold whatever is desired, but by adding base64 encoding it can be self-contained while also producing much smaller output. I have switched to OpenCV, since it's the fastest package I could find for this (without external dependencies).
(Jonathan, the HTML formatting is as close to the original static tables as possible.)
Example run with a small change to add hash compares: hash_results.pdf
I've been playing around with this locally, and the -e flag is hugely valuable for homing in on the issues; it's all too easy to miss a failure in a sea of successes.
Also really liking the markdown output!
I did have to install the OpenCV Python module to get the diff to work; the old PIL code doesn't seem to act as a fallback for generating the images. Honestly, though, it feels a LOT faster to me using OpenCV.
It sounds like this work has evolved quite a bit since my last test run, just before the holiday, and I'm looking forward to giving the latest version a try!
I have no objection to fully switching from PIL to OpenCV, though I'm curious as to whether we ought to remove the remaining PIL import code from the script, so that we're not depending on two image processing libraries in a single script.
Ultimately, our goal is to use OpenImageIO for all of this logic, but we can consider that a future improvement, perhaps timed with the proposed support for Color Moment Hashing in OIIO.
I can remove the PIL code. It is the slowest, is less accurate due to type conversion, and I believe it won't work with EXR files.
Definitely the aim is to consolidate library usage. I was thinking the next step is to decouple the analysis tools from this script, so that a verification tool can be passed in, as is done for the generateShader.py script. The default could be set to OpenImageIO.
This way, analysis data can be computed without performing a comparison; e.g., hashes can be stored per run, and the similarity test done as needed.
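As a rough sketch of that decoupling (all names here are hypothetical, and a simple block-mean signature computed with numpy stands in for a real perceptual hash such as Color Moment):

```python
import numpy as np

def store_signature(store: dict, run_id: str, image: np.ndarray) -> None:
    """Capture step: record a compact per-run signature (4x4 grid of block
    means over a grayscale image). No comparison happens at capture time."""
    h, w = image.shape[:2]
    sig = image[: h // 4 * 4, : w // 4 * 4].astype(np.float64)
    sig = sig.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3))
    store[run_id] = sig

def compare_runs(store: dict, a: str, b: str) -> float:
    """Deferred similarity test: RMS distance between two stored signatures,
    which can be run long after the images themselves were produced."""
    diff = store[a] - store[b]
    return float(np.sqrt((diff ** 2).mean()))
```

Stored signatures are small enough to keep per run (e.g. serialized into the JSON results), so any two runs can be compared later without re-reading the original images.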
The other query, besides color moment support, is base64 encoding support in OIIO, to avoid data copies and conversion via numpy. It may be best to keep these as separate tool options, as I can see this being a client-side (web) action as well. TBD.
> The other query beside color moment support is base64 encoding support in OIIO to avoid data copies conversion via numpy.
What does this mean?
Sorry for the poor phrasing. I'm just noting that the logic to create base64-encoded inline image data would need to be revisited if using OIIO.
PIL support removed. The script now uses only OpenCV and numpy.
I don't have much more time to spend on this, so I've extracted an "image utils" class which currently uses OpenCV. Since that's available, I've also added a "difference method" option to allow Color Moment hash diffs; RMS is still the default diff method.
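A minimal sketch of what such a difference-method switch might look like (the function and method names here are hypothetical, not the script's actual API; the COLORMOMENT branch assumes opencv-contrib-python, which provides the `cv2.img_hash` module):

```python
import numpy as np

def image_difference(img_a: np.ndarray, img_b: np.ndarray,
                     method: str = "RMS") -> float:
    """Dispatch on the chosen difference method; RMS remains the default."""
    if method == "RMS":
        # Root-mean-square pixel difference, computed in float64 to avoid
        # uint8 overflow and extra data copies.
        diff = img_a.astype(np.float64) - img_b.astype(np.float64)
        return float(np.sqrt((diff ** 2).mean()))
    if method == "COLORMOMENT":
        # Requires opencv-contrib-python for the img_hash module.
        import cv2
        hasher = cv2.img_hash.ColorMomentHash_create()
        return float(hasher.compare(hasher.compute(img_a),
                                    hasher.compute(img_b)))
    raise ValueError(f"unknown difference method: {method}")
```

Keeping the dispatch in one place makes it straightforward to add further methods later (e.g. an OpenImageIO-backed comparison) without touching the callers.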
Thanks for all of these additional improvements, @kwokcb, and on my side I'd like to run our full GLSL/OSL render suite at reference quality, to see how the latest PDF compares with earlier versions.