Add a function to convert from/to a known external color space
Some applications have functionality that generates synthetic objects that need to be integrated with a scene. These objects are sometimes in a specific color space. For example, a physical sky simulator might produce a blue gradient with known CIE XYZ coordinates. Another example would be simulation of human hair or skin. These synthetically generated objects must then be converted into RGB values for the current working space, as defined by the OCIO config. In the past, many applications assumed the working space was linear Rec.709, but that is no longer a good assumption. Inserting linear Rec.709 RGB values for a physical sky simulation into a renderer that is using ACEScg as the working space gives obviously wrong-looking results.
Another example is with applications that integrate an SDK for a digital cinema camera. These applications receive decoded frames from media files that are in a specific color space. It may then be necessary to convert these frames into a color space defined in the current OCIO config used by the application.
Historically, OCIO has intentionally avoided providing a formal connection from color spaces in a config to known external color spaces such as CIE XYZ. In OCIO v2, two new "interchange" roles (aces_interchange and cie_xyz_d65_interchange) were added to facilitate use-cases as described above. However, to respect the original design intentions of OCIO, these roles were left optional and may not be present in v2 configs, let alone v1 configs.
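For reference, in a config that does define the interchange roles, they are just entries in the `roles` section mapping each role name to a color space in that config. A minimal fragment might look like the following (the color space names on the right are examples; a given config could use any names for spaces with those definitions):

```yaml
roles:
  aces_interchange: ACES2065-1
  cie_xyz_d65_interchange: CIE-XYZ-D65
```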
As discussed in a number of threads with Larry, Zach, and others on Slack, it would be very helpful for the API to include a function that could be used to convert between some well-known external color space, such as CIE XYZ or linear Rec.709, and a color space in a given config. If the interchange roles are present, these would be used. If not, there would be a series of heuristics that would be used to identify a known color space. These might be inaccurate in some cases, but the reality today is that application developers are already creating heuristics to solve these types of issues, and there is value in at least having a standard set of heuristics that are documented and which could be improved over time, if needed.
For convenience, the function should offer a small number of scene-referred and display-referred color spaces that could be accepted. These would certainly include the current two interchange roles, but other spaces could be added, if desired. For example, scene-linear Rec.709 might be something many developers would appreciate for integrating OCIO with existing code.
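To make the idea concrete, here is a purely hypothetical sketch of what the set of accepted known spaces might look like; the enum name, members, and helper are all placeholders (the actual API shape is TBD below), and the sketch simply records which interchange role, if any, anchors each known space:

```python
from enum import Enum

# Hypothetical names -- the final API shape is TBD in this issue.
class KnownColorSpace(Enum):
    ACES2065_1 = "aces_interchange"          # scene-referred interchange role
    CIE_XYZ_D65 = "cie_xyz_d65_interchange"  # display-referred interchange role
    LINEAR_REC709 = None                     # no dedicated role; reached via a known matrix

def interchange_role_for(space: KnownColorSpace):
    """Return the OCIO interchange role that anchors this known space,
    or None if it would be reached through a known matrix instead."""
    return space.value
```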
The goal of the heuristics is to identify a known linear color space in the config. The heuristics could include the following:
- Assume the scene-referred and display-referred reference spaces are linear. Define a set of candidate reference spaces including ACES2065-1, ACEScg, linear Rec.709, and CIE XYZ.
- Try to find a color space whose from/to_ref transform is linear (i.e., only a Matrix, or set of Matrices) and check if the overall matrix matches one of a set of known matrices based on a set of common color spaces. The matching would be within some tolerance (and perhaps a scale factor?).
- If the above fails, the reference space may be using an unknown set of primaries. Look for strings such as "sRGB", "ACEScg", and "Rec*709" to identify the primaries being used by a color space. Then check if it has a linear relationship to the reference space.
- If the above fails, assume the rendering role (if present) is linear Rec.709. Otherwise assume the scene_linear role is linear Rec.709. Note that this may not be the most common value for those roles, but I'm thinking that the expected last fallback should be linear Rec.709. Any thoughts?
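The matrix-matching step above could be sketched as follows. This is only an illustration, assuming the overall from-reference transform has already been collapsed to a single 3x3 matrix; only the Rec.709-to-XYZ matrix is filled in here, and the normalization is one possible way to allow for an overall scale factor:

```python
# Known linear RGB -> CIE XYZ (D65) matrix for Rec.709 / sRGB primaries.
REC709_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

# ACES2065-1, ACEScg, CIE XYZ, etc. would be added to this table.
KNOWN_MATRICES = {"lin_rec709": REC709_TO_XYZ}

def _normalize(m):
    # Divide by the largest-magnitude entry so the comparison
    # ignores an overall scale factor.
    peak = max(abs(v) for row in m for v in row)
    return [[v / peak for v in row] for row in m]

def identify_matrix(candidate, tol=1e-3):
    """Return the name of the known space whose matrix matches within
    the tolerance, else None."""
    cn = _normalize(candidate)
    for name, known in KNOWN_MATRICES.items():
        kn = _normalize(known)
        if all(abs(a - b) <= tol
               for ra, rb in zip(cn, kn) for a, b in zip(ra, rb)):
            return name
    return None
```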
Once the heuristic has identified a color space in the config that is a known color space, the conversion process is basically the same as if the interchange roles are defined. The function could build a simple config with the defined conversion spaces, add the known color space from the other config, and then use one of the existing GetProcessorFromConfigs calls to create a Processor for the conversion.
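For the simple all-linear case, the composition performed by that conversion can be illustrated in a few lines: the interchange space acts as the common hub, so the full transform is just src-to-interchange composed with interchange-to-dst. (In practice, the existing GetProcessorFromConfigs calls handle arbitrary transforms, not only matrices; this sketch is just the linear special case.)

```python
def matmul(a, b):
    # 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, rgb):
    # Apply a 3x3 matrix to one RGB triplet.
    return [sum(m[i][k] * rgb[k] for k in range(3)) for i in range(3)]

def convert_via_interchange(src_to_interchange, interchange_to_dst, rgb):
    """Compose the two legs through the interchange space and apply them."""
    return apply(matmul(interchange_to_dst, src_to_interchange), rgb)
```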
Regarding the sub-problem of identifying a linear color space, see also issue #1399.
A more formal specification of the API and heuristics is TBD.
Suggestions are welcome, as is feedback and discussion on the proposed functionality.
This is awesome, it completely captures my concerns and use cases, explains it very clearly. On the renderer side, this would solve a number of problems that crop up.
I think this is totally reasonable. Thanks, Doug. Agreed that linear Rec.709 should be the last fallback, seeing as most renderers tend to assume those primaries, absent additional information.
Perhaps out of scope here, but I do think it would be wonderful if authors could attach optional chromaticities and chromatic adaptation transform (relative to implicit scene / display reference) metadata to Color Spaces. I feel like that would dovetail nicely with configs generated from OpenColorMath, if and when that becomes a thing.
(We're also not too far from the ten-year anniversary of this ticket: https://github.com/AcademySoftwareFoundation/OpenColorIO/issues/231 :) )
If we could provide CIE colourimetry it would be extremely useful. There is no need for “scene” vs “display”, as that will likely become legacy. The CIE colourimetric definitions work for everything based on colourimetry downstream of them.
Would it not be sane to simply enforce a CIE_XYZ role and be done with it?
Why do you say "scene" vs "display" will likely become legacy? I'm not sure why you have that impression. The two reference spaces relate different image states, falling on either side of a View Transform (and are themselves related by a required default "colorimetry" View Transform). Even if "scene" and "display" color spaces were both defined relative to XYZ, there would still be a need to distinguish between pre- and post-View Transform states.
That said, unless I'm mistaken -- totally possible -- I think the elephant in the room is the ACES white point, and the arbitrary methods by which vendors implement adaptation from D65. If ARRI uses a CAT02 von Kries transform, RED uses a Bradford, and FilmLight just scales by an identity matrix, then trying to implement non-ACES View Transforms in a config whose scene-referred color spaces are all defined with ACEScsc transforms necessarily means some paths make incorrect assumptions about how to "get back to" D65; and, likewise, there's only one default colorimetry path from the "scene-reference" to the "display-reference". (I think Doug mentioned that SynColor provided a means for additional, alternate paths, but it meant drastically increasing graph complexity and introducing potential path-finding ambiguity.)
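To illustrate why the choice of CAT matters: a von Kries-style adaptation matrix depends entirely on the cone-primary matrix used, so Bradford, CAT02, and an identity scaling all yield different D65-to-ACES-white matrices. A self-contained sketch of the standard construction with the Bradford cone primaries (the white point values are the usual published CIE XYZ coordinates):

```python
# Bradford cone-response matrix (XYZ -> cone-like LMS).
BRADFORD = [
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
]

D65_WHITE = [0.95047, 1.0, 1.08883]   # CIE XYZ of D65
ACES_WHITE = [0.95265, 1.0, 1.00883]  # CIE XYZ of the ACES white point (x=0.32168, y=0.33767)

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse3(m):
    # Cofactor expansion for a 3x3 inverse.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def von_kries_cat(src_white, dst_white, cone=BRADFORD):
    """Build the XYZ-to-XYZ adaptation matrix: cone^-1 * diag(dst/src) * cone."""
    src_cone = apply(cone, src_white)
    dst_cone = apply(cone, dst_white)
    scale = [[dst_cone[i] / src_cone[i] if j == i else 0.0 for j in range(3)]
             for i in range(3)]
    return matmul3(inverse3(cone), matmul3(scale, cone))
```

By construction, the resulting matrix maps the source white exactly to the destination white; swapping `BRADFORD` for the CAT02 matrix (or the identity) changes how every other colour is adapted, which is exactly the vendor-to-vendor ambiguity described above.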
Why do you say "scene" vs "display" will likely become legacy?
Because ultimately the CIE colour model underlies all of this, and the “scene” vs “display” distinction is just one arbitrary piece of historical vernacular.
In the end, every single encoded value holds meaning, and that meaning is relative to the CIE XYZ foundation, not ACES.
ACES adds in some hand wavy “surround” rubbish, but it’s nothing beyond a power function atop CIE XYZ based colourimetry.
My point would be that folks can layer whatever historical rubbish they want atop CIE XYZ colourimetry, so it would be incredibly sane to use that as the ground truth for everything. Possibly even specifying revisions, such as the 1931 CMFs vs the 2012 ones, etc., paving the way for potential optional control.
You're conflating two separate concerns. Reread what I wrote above:
The two reference spaces relate different image states, falling on either side of a View Transform (and are themselves related by a required default "colorimetry" View Transform). Even if "scene" and "display" color spaces were both defined relative to XYZ, there would still be a need to distinguish between pre- and post-View Transform states.
So, as far as OCIO goes, the difference between the "scene" and "display" reference spaces is not arbitrary historical vernacular -- it is not merely a distinction between some RGB encoding and XYZ. Here, "scene" and "display" refer to distinct stages in the rendering pipeline -- pre-DRT and post-DRT. That, canonically (in terms of BuiltinTransforms), one happens to be AP0 and the other XYZ is incidental to the purpose the separation of reference spaces serves.
And to be clear, there's no problem with authoring a config that uses XYZ for both the "scene" and "display" reference spaces -- you'd just have to specify an identity transform for the default "colorimetry" View Transform; and, indeed, one could choose not to implement Display Color Spaces and View Transforms at all, and author configs that relate all color spaces via a single reference.
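As a sketch of what that identity case might look like in a config (the `colorimetry` name is just an example, and an empty `MatrixTransform` is an identity):

```yaml
view_transforms:
  - !<ViewTransform>
    name: colorimetry
    from_scene_reference: !<MatrixTransform> {}
```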
Hit me up privately and I'll provide some examples.
Anyway, to button things up:
Would it not be sane to simply enforce a CIE_XYZ role and be done with it?
Yes, I do think that would be sane. But this ticket concerns configs that aren't defining a CIE_XYZ role, because that hasn't historically been a requirement.
I’m happy to discuss pre and post image formation. That’s not, as you know, a conflation or problem.
The point I would make is that both are properly and ultimately correctly expressed in terms of CIE colourimetry.
That, canonically (in terms of BuiltinTransforms), one happens to be AP0 and the other XYZ is incidental to the purpose the separation of reference spaces serves.
If it is incidental, then it’s a poor design choice. Use CIE colourimetry. Always.
Yes, I do think that would be sane. But this ticket concerns configs that aren't defining a CIE_XYZ role, because that hasn't historically been a requirement.
Which is where OpenColorIO could enforce a role.
Currently, the assumption on adjacent colour management workflows is “If unspecified, assume BT.709 primaries and transfer.” which would seem a sane entry point for OpenColorIO.
It provides a reasonable guess, and allows folks to specify the underlying XYZ colourimetry where required for software, in a single line.
Closing as implemented via PR #1710.