Colorspace inconsistencies.
I have created three MaterialX files that I believe should be visually equivalent, but I get three different images when I render them with MaterialXView.
All three read the same image file, and then apply a scale factor of (0.5, 0.5, 0.5). The image file is read in srgb_texture colorspace and the scale factor is defined to use the colorspace lin_rec709. All three cases use lin_rec709 as the document colorspace.
MaterialX image and multiply node
<image name="image" type="color3" nodedef="ND_image_color3">
<input name="file" type="filename" value="default_texture.png" colorspace="srgb_texture"/>
</image>
<multiply name="mult" type="color3">
<input name="in1" type="color3" nodename="image"/>
<input name="in2" type="color3" value="0.5,0.5,0.5" colorspace="lin_rec709"/>
</multiply>
MaterialX image and multiply node inside a nodegraph
<nodegraph name="image_cc">
<input name="myFile" type="filename" value="default_texture.png" colorspace="srgb_texture"/>
<input name="myScale" type="color3" value="0.5,0.5,0.5" colorspace="lin_rec709"/>
<output name="out" type="color3" nodename="mult"/>
<image name="image" type="color3" nodedef="ND_image_color3">
<input name="file" type="filename" interfacename="myFile"/>
</image>
<multiply name="mult" type="color3">
<input name="in1" type="color3" nodename="image"/>
<input name="in2" type="color3" interfacename="myScale"/>
</multiply>
</nodegraph>
USDUVTexture node
<UsdUVTexture name="image" type="multioutput" nodedef="ND_UsdUVTexture_23">
<input name="file" type="filename" value="default_texture.png" colorspace="srgb_texture" />
<input name="scale" type="color4" value="0.5,0.5,0.5,1.0" colorspace="lin_rec709" />
</UsdUVTexture>
Some boilerplate is omitted from the examples above, but included in the attached zip file.
I don't believe it makes sense to scale the color like this:
<input name="in2" type="color3" value="0.5,0.5,0.5" colorspace="lin_rec709"/>
I think you want to use "raw" as the color space here because you are intending to scale by a scalar value, not a color with an interpretation in a color space. I don't think that change will affect your examples, but it would be a better place to start working from.
In this document I believe setting to "raw" and "lin_rec709" would be equivalent, because the document color space is also set to "lin_rec709".
But is it not valid to specify a colorspace for any color input, explicitly defining what colorspace those input numbers are defined in? For instance, if someone changes the colorspace of the document, we don't want the intended color value defined on the input parameter to change.
Or am I missing something here?
I think the issue comes from where the colorspace transform nodes are applied, when requested. From what I understand, when a colorspace transform is applied for a filename, the colorspace conversion node is inserted after the node that the filename is exposed from.
A picture is worth a thousand words as they say....
MaterialX image and multiply node
graph LR;
A("image");
A --> B("color_transform");
B --> C("multiply");
MaterialX image and multiply node inside a nodegraph
graph LR;
subgraph nodegraph;
direction LR;
AA("image");
AA --> BB("multiply");
end;
nodegraph --> CC("color_transform");
USDUVTexture node
graph LR;
AAA("`UsdUVTexture
(incl multiply)`");
AAA --> CCC("color_transform");
We have already added support to the OSL code generator to process the colorspace transformation at the site of the texture read call. I think we should explore how to introduce a similar concept to the other shader generators, instead of applying a post-node color transform that is only ever correct for the native MaterialX image nodes, and rarely correct for USD texture nodes or other nodegraph encapsulations of image nodes.
This sounds similar to the issue https://github.com/AcademySoftwareFoundation/MaterialX/issues/2181 we experienced in 1.38.10 when switching from 1.38.8. We discovered that nodes with internal images would have the transformation applied multiple times.
I revisited this yesterday when testing 1.39.3 release and it seemed to be fixed for OSL shadergen. But based on your comment above @ld-kerley, is this only fixed when generating for OSL?
We have already added support to the OSL code generator to process the colorspace transformation at the site of the texture read call.
@bowald I think this is slightly different to your concern regarding double application of the color transform. I'm no color scientist, but I think MaterialX is currently applying the color transform in the wrong place for textures. Instead of after the node that contains the filename parameter, I believe the color transform should be applied at the point of the image read. This would ensure that any operations performed internal to the node would happen on the correctly transformed color read from the texture.
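The placement question can be sketched numerically. This is a hypothetical illustration, not MaterialX code: because the sRGB decode is nonlinear, decoding the texel before the multiply gives a different result than multiplying first and decoding afterwards, which is why the position of the inserted transform node matters.

```python
# Hypothetical sketch (not the MaterialX API): why transform placement matters.
# The sRGB decode is nonlinear, so applying it before vs. after a multiply
# yields different linear values.

def srgb_to_linear(c):
    # Piecewise sRGB transfer function, per IEC 61966-2-1.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

texel = 0.5   # value as stored in an sRGB-encoded texture
scale = 0.5   # linear-space scale factor

decode_then_scale = srgb_to_linear(texel) * scale   # transform at the read site
scale_then_decode = srgb_to_linear(texel * scale)   # transform after the node

print(decode_then_scale)  # ~0.1070
print(scale_then_decode)  # ~0.0509
```

The two orderings disagree by roughly a factor of two for these inputs, which matches the kind of visible brightness differences reported above.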
I'll note that there is another improvement in the OSL code generation in this regard: we now pass the colorspace to the OSL texture() call (if you're using a new enough OSL library), and the transform is then applied by OSL itself. We do not do this for any other languages yet, but hopefully we can find the correct way to do so.
I just spent a couple hours debugging, until I realized I also ran into this issue.
The visual difference is more drastic here, because the sampled data is processed further – e.g. has contrast applied to it.
TextureSamplingColorspaceBug.zip
In the attached file, you can see a difference between:
A: Direct sampling:
This results in the following GLSL code:
vec4 tex_out = vec4(0.0);
mx_image_color4(_Map, 0, vec4(1.000000, 1.000000, 1.000000, 1.000000), vector4_to_vector2_out, 2, 2, 1, 0, 0, 0, vec2(1.000000, 1.000000), vec2(0.000000, 0.000000), tex_out);
vec4 tex_out_cm_out = vec4(0.0);
NG_srgb_texture_to_lin_rec709_color4(tex_out, tex_out_cm_out);
B: Sampling in a subgraph:
This results in the following GLSL code:
vec4 TextureGraph_Out = vec4(0.0);
NG_TextureGraph(_Map, vector4_to_vector2_out, TextureGraph_Out);
and
void NG_TextureGraph(sampler2D _Map, vec2 _UV, out vec4 Out)
{
vec4 tex_out = vec4(0.0);
mx_image_color4(_Map, 0, vec4(1.000000, 1.000000, 1.000000, 1.000000), _UV, 2, 2, 1, 0, 0, 0, vec2(1.000000, 1.000000), vec2(0.000000, 0.000000), tex_out);
vec4 color4_to_vector4_out = vec4(0.0);
NG_convert_color4_vector4(tex_out, color4_to_vector4_out);
Out = color4_to_vector4_out;
}
It seems like there is no color conversion at all in case B.
I also noticed that the colorspace seems to be applied differently in MaterialXGraphEditor, depending on whether I click on the surfacematerial or a nodegraph that returns a surfaceshader, which does not feel correct:
I'd like to bump this issue as I'm not aware of a workaround, and it basically breaks all subgraph-focused workflows as far as I can tell.
I think this is just a fundamental limitation of applying the colorspace as a node in the graph. It is always going to be possible to have a node that applies some transformation to the color that's read from a texture before passing it out of the node. Subgraphs are one such case, which we may be able to fix with some shader generator updates, but if we consider a custom node implemented in source code, there would never be a way to inject the color transformation in the correct place (directly after the texture read).
I actually think the OSL approach is more correct here: MaterialX should not try to perform the color transforms as part of the nodegraph, and should instead pass the colorspace to the texture reading subsystem and apply the transformation there.
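The read-site idea can be sketched as follows. This is a hypothetical illustration (not the MaterialX or OSL API): the texture reading function itself is made colorspace-aware, so every downstream operation already sees correctly transformed linear values.

```python
# Hypothetical sketch (not the MaterialX API): a colorspace-aware texture
# read, where the decode happens inside the reading subsystem itself.

def srgb_to_linear(c):
    # Piecewise sRGB transfer function, per IEC 61966-2-1.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def sample_texture(texels, u, colorspace):
    # 'texels' stands in for a 1D scanline of stored values; a real
    # implementation would go through the renderer's texture system.
    raw = texels[int(u * (len(texels) - 1))]
    if colorspace == "srgb_texture":
        return srgb_to_linear(raw)  # transform at the point of the read
    return raw                      # already linear (or "raw"): no-op

texels = [0.0, 0.25, 0.5, 1.0]
linear = sample_texture(texels, 0.5, "srgb_texture")
```

With this shape, a node like the `texture_color_multiply` example later in the thread would receive linear values before its internal multiply, with no post-node transform needed.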
This would be a pretty sizable change - and something we would want to discuss with all the stakeholders of the project. It feels like a good topic for the MaterialX TSC to discuss.
@ld-kerley It's my sense that our default color space management system, which injects color transformations into the shading code rather than modifying textures in place, is a useful workflow for many teams, and is worthy of additional code improvements!
Just for some clarification - I'm not proposing modifying any texture files in place, or even in memory - but instead I think we would be able to have a more robust color management system for texture reads if the code doing the reading of the textures was made colorspace-aware and did the color transformation directly at the point in the shader code where the texture is read.
Certainly the current system is useful for a subset of possible MaterialX use-cases, and I'm not proposing removing that, though it would be helpful to be able to disable the color management of the texture separately from the input values, and investigate ways to allow for direct color management at the site of the texture read.
Note: I believe that currently in the OSL case, if node-based color management is performed, we may even be applying the color transform twice: once in the OSL texture call and once in the inserted node.
I'd like to bump this issue as I'm not aware of a workaround, and it basically breaks all subgraph-focused workflows as far as I can tell.
For this particular item, selecting the graph or the downstream surface shader should both give the same result -- at least it used to, so this seems like a regression in behaviour.
For the input on the nodegraph which has a colorspace, @ld-kerley, the diagram you have should not be what occurs. The colorspace should be applied to the image node, not after the fact; if it is being applied afterwards, then it's a bug. The UsdUVTexture case appears to be similar, in that the transform should also not be done after, but within the UsdUVTexture graph -- i.e. it seems like the same bug?
@ld-kerley, it would be interesting to add meta-data, but I guess the question is where it resides? OSL (or any code generator) can insert shader transforms, but I believe the code generator should then turn off adding any colorspace meta-data to the OSL image lookup -- if not, it's a bug.
@ld-kerley If such a bug exists in our OSL shader generation, we should definitely address it!
Looking at the latest GLSL/OSL render comparisons in our test suite, though, I'm not seeing evidence of such a bug being present, at least in the open shader generators that we ship with MaterialX:
@jstone-lucasfilm, the test suite turns off shader transform insertion by default. I didn't see any firewall test for when it's on, so I think you can get double transforms if you turn it on.
I also noticed that you made some improvements to the interfacename connection search for colorspace, but there could be more edge cases to check.
I think the correction needs to be done in the graph, rather than at the renderer. If you want to do conversions at renderer load time, you introduce a need for caching, i.e. cat.exr becomes cat.rec2020-exr, so you'd need a look-aside. The reason I think you would need a cache is that there might be more than one reference to cat.exr, and if you mutate it, it will be surprising for other systems that do not expect an on-the-fly mutated file.
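The look-aside cache described here can be sketched in a few lines. This is a hypothetical illustration (the loader and converter names are invented for the example, not real MaterialX or Hydra APIs): converted images are cached under a (path, source, target) key, so the original cat.exr is never mutated and every reference resolves consistently.

```python
# Hypothetical sketch of a look-aside cache for load-time colorspace
# conversion. The original file is never mutated; converted pixel data
# is keyed by (path, source colorspace, target colorspace).

_converted_cache = {}

def load_image(path):
    # Stand-in for a real image loader; returns placeholder pixel data.
    return f"<pixels of {path}>"

def convert(pixels, src, dst):
    # Stand-in for a real colorspace conversion; identity when src == dst.
    return f"{pixels} converted {src}->{dst}" if src != dst else pixels

def load_in_colorspace(path, src, dst):
    key = (path, src, dst)
    if key not in _converted_cache:
        _converted_cache[key] = convert(load_image(path), src, dst)
    return _converted_cache[key]

a = load_in_colorspace("cat.exr", "lin_rec709", "rec2020")
b = load_in_colorspace("cat.exr", "lin_rec709", "rec2020")
assert a is b  # second lookup hits the cache; the file on disk is untouched
```

Even with such a cache, the streaming and virtualized-texture concerns below still apply, since conversion would have to happen per resident tile rather than per file.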
My own experience with renderers that mostly take responsibility for image loading (i.e. they attempt to bypass Hydra's loader) is that it leads to surprising results, where some things work in some contexts and not others.
Also, I think preconversion could get really hairy with streamed, or virtualized textures.
My feeling is that the guidance could be that it's better to preconform textures that need color space mutation, and otherwise pay the cost in the mx sampling function itself?
That's a great analysis, @meshula, and it closely matches my own experiences with streaming and virtualized texture systems in real-time engines.
@kwokcb Let me know if I'm misremembering, but I believe our render test suite applies graph-based transforms through the default color management system, and we disable the OSL-specific color space attributes. That allows us to compare graph-based transforms across GLSL, OSL, and other languages, without any interference between the default color management system and language-specific color space features.
Yes. My mistake. Still might want to put in guards for renderers that parse the colorspace image meta-data. Think I used to do this for Arnold.
I have a contrived toy example to share, but it is derived from a real production case. (Note this might not quite be legal MaterialX; I wrote it from memory without validation.)
<materialx version="1.39">
<nodedef name="ND_texture_color_multiply" node="texture_color_multiply">
<input name="filename" type="filename" value="" />
<input name="color" type="color3" value="1.0, 1.0, 1.0" />
<input name="texcoord" type="vector2" defaultgeomprop="UV0" />
<output name="out" type="color3" />
</nodedef>
<implementation name="IM_texture_color_multiply_glsl" nodedef="ND_texture_color_multiply" file="texture_color_multiply.glsl" target="glsl"/>
</materialx>
with the GLSL source
vec3 mx_texture_color_multiply(vec2 texcoord, sampler2D tex_sampler, vec3 color)
{
vec3 tex_color = texture(tex_sampler, texcoord).rgb;
// for this to be correct I think the colorspace correction needs to be
// applied here - before the multiply
vec3 result = tex_color * color;
return result;
}
This is the case where it's unclear to me how we can correctly handle the colorspace transformation. Currently the same is also true of a nodegraph-based implementation, but I think there might be a way to refactor the shader generator to account for that case.
What I was trying to poke at is the idea we might be able to insert the colorspace transformation code at the location of the comment in the GLSL code above somehow. I'm not a big fan of the text substitution system in the shader generator, but perhaps that could be used here to insert the appropriate transform.
vec3 mx_texture_color_multiply(vec2 texcoord, sampler2D tex_sampler, vec3 color)
{
vec3 tex_color = texture(tex_sampler, texcoord).rgb;
tex_color = $colorTransform(tex_color);
vec3 result = tex_color * color;
return result;
}
where $colorTransform might be the code generated for the appropriate color transform MaterialX node.