Custom shaders for models and 3D Tiles
We're in the process of refactoring the glTF / model system, and one of the end goals in the next few months is to add support for custom shaders, similar to what you'd find in other engines (see Three.js ShaderMaterial and Babylon.js ShaderMaterial). This will give developers full control over the visualization of their models and 3D Tiles.
For background, CesiumJS already has various levels of support for custom shaders:
- Fabric for primitives
- Post processing stages
- glTF 2.0 models with the KHR_techniques_webgl extension
- Declarative Styling - not strictly custom shaders, but custom styling of features based on their metadata
With 3D Tiles Next around the corner, we have new methods of storing metadata, including per-texel metadata, that are ready to be unlocked.
Approaches
Two possible approaches for supporting custom shaders are described below:
- Add a custom shader callback to the fragment shader that has well-defined inputs (vertex attributes and custom uniforms, including textures) and well-defined outputs (color, show). Attributes that are unused can be automatically disabled (for example, if normals are unused they'll be disabled in the VertexArray). We could also add a callback to the vertex shader, where outputs might be position and point size, and maybe some abstraction for varyings. This is roughly similar to Fabric and post-processing stages.
- Give full control over model building - creating textures, buffers, shaders, uniforms, etc. This offers the most control but is probably overkill, and would likely require exposing the private Renderer API. This is similar to KHR_techniques_webgl.
For now I'm leaning towards option 1. A rough example might be:
// Color fragments based on a per-vertex classification
// a_classification and a_normal are available as vertex attributes
// u_classificationCount and u_classificationTexture are available as custom uniforms
// czm_pbrMetallicRoughness and czm_lightDirectionEC are available as global built-ins
vec2 featureSt = vec2(a_classification / u_classificationCount, 0.5);
vec4 classificationColor = texture2D(u_classificationTexture, featureSt);
color = czm_pbrMetallicRoughness(classificationColor, czm_lightDirectionEC, a_normal, ...);
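For illustration, the JavaScript side of option 1 might look something like the sketch below; the constructor options are assumptions for discussion, not an existing API:
// Hypothetical setup for the callback above; names are illustrative only
const customShader = new CustomShader({
    fragmentShaderCallback: fragmentShaderString,
    uniforms: {
        u_classificationCount: 10.0,
        u_classificationTexture: classificationTexture
    }
});
tileset.customShader = customShader;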
Engine Architecture
Already in progress, see https://github.com/CesiumGS/cesium/pull/9517
- Separate glTF loading from internal model representation
- Add low-level model builder to the private API
- Convert b3dm, i3dm, and pnts tile formats to the internal model representation at runtime
- Unify metadata into FeatureMetadata
- Expose model materials in the public API, including setters for custom shaders (like the one above) and custom uniforms
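Once materials are exposed, usage might look roughly like the sketch below; the material properties shown are assumptions about the eventual public API, not something that exists today:
const model = scene.primitives.add(Model.fromGltf({ url: "model.gltf" }));
model.readyPromise.then(function () {
    // Hypothetical setters for the custom shader and its uniforms
    model.material.customShader = customShaderString;
    model.material.uniforms = { u_classificationTexture: texture };
});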
Questions
We're still early in the design phase and there are many open questions:
- [x] How would custom shaders be exposed at the Cesium3DTileset level, particularly for heterogeneous tilesets where not all contents share the same vertex attributes?
- [ ] How is feature metadata made accessible to shaders?
- [ ] How does this affect derived commands (e.g. shadows)?
- [ ] What GLSL version is supported? Probably GLSL 1.0 to start. In the future we might need a meta shading language.
- [ ] Are czm_ built-in functions enumerated anywhere?
- [ ] What's the long term plan for unifying Fabric, post processing, and model shaders, as well as adding custom shaders to other systems like terrain/imagery?
Related issues
- https://github.com/CesiumGS/cesium/issues/7652
- https://github.com/CesiumGS/cesium/issues/5094
- https://github.com/CesiumGS/cesium/issues/2387
@ptrgags @sanjeetsuhag Here's a summary of where I left off in the Model.js refactor, file by file. Hopefully this will clarify the direction I was going. At least you'll know which files to look at and which to ignore. Let me know if you have any questions, even about the smallest details (because there are a lot of important details).
https://github.com/CesiumGS/cesium/tree/model-loading
CustomShader.js
Follows option 1 above.
CustomShader.fromShaderString takes a shader string created by the user and generates the full custom shader code. This is not the final shader used by Model, just a piece of it.
The shader uses four input structs (Input, Attribute, Uniform, and Property) and one output struct (Output):
- Input - contains well-known inputs to the shader like input.position, input.normal, etc. The full list of inputs is in InputSemantic.js. These are derived from vertex attributes, similar to materialInput.glsl. @IanLilleyT will be adding more semantics here.
- Attribute - these are the raw vertex attributes from ModelComponents, like attribute.POSITION and attribute.NORMAL. There are subtle differences between this and input. For example, if the glTF has a TANGENT attribute, attribute.TANGENT would be a vec4 (.w stores the handedness as defined by the glTF spec) whereas input.tangent would be a vec3, as there is a separate input.bitangent derived from the handedness. This struct is mainly useful for accessing attributes that are not in input.
- Uniform - has all the user-defined uniforms.
- Property - has metadata properties. The code that populates this struct goes outside the custom shader, which I did not start. https://github.com/CesiumGS/cesium/issues/9572 is involved in that.
- Output - has three properties that can be set within the shader: color, show, and pointSize (vertex shader only).
CustomShader.fromShaderString parses the shader and returns information about it: basically, what attributes, uniforms, and properties are used, so that the model can optimize what data it sends to the GPU. It also tells the model whether the custom shader is applied in the vertex or fragment shader. There's a pretty big decision tree for that, and it gets even more complicated in CustomShader.fromStyle.
CustomShader.fromStyle takes a Cesium3DTileStyle and converts it into a custom shader. Actually, it doesn't always create a custom shader; sometimes it determines that CPU styling is better (like if string properties are used). See the top comment in the code for more details.
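For reference, usage of the two entry points might look like the sketch below; the returned property names are assumptions based on the descriptions above, not the actual branch code:
// Hypothetical shapes, for discussion only
const custom = CustomShader.fromShaderString(shaderString);
// e.g. custom.attributesUsed and custom.uniformsUsed tell the model what
// data to upload, plus a flag for whether it runs in the vertex or
// fragment shader

const style = new Cesium3DTileStyle({ color: "rgb(255, 0, 0)" });
const styled = CustomShader.fromStyle(style);
// may decide CPU styling is better and not produce a shader at all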
There is a long TODO list at the top of the file, but overall this file is nearly complete from my perspective, though I think the API could be organized differently, and the shader structs could be renamed or consolidated in different ways. At some point I'll need to go through the TODO list and make more sense of it.
InputSemantic.js
Related to CustomShader.js. Also nearly complete from my perspective. Needs a better name.
ModelShader.js
This was the first iteration of the shader cache before I went with a different approach. For the most part it can be ignored.
ModelShaderTemp.js
This was the second iteration that was never finished. This file is meant to incorporate a lot of different systems to build the final model shader. I made the most progress on vertex attributes. Probably best to just reference this file rather than build on top of it.
ModelMaterialInfo.js
Gets information about the PBR material. Sees what textures, uniforms, and attributes are needed for the shader. This file is pretty close to complete from my perspective. It's a building block for ModelShaderTemp.js.
NewModel.js
This is Model.js 2.0. A lot of the code was moved into other files but the code and comments for quantized attributes and meshopt are still very relevant.
ModelVS.glsl
This needs to be replaced with a shader builder. I started to do that in ModelShaderTemp.js. Generally the logic is good but the new shader builder should support any number of texture coordinate sets, not just TEXCOORD_0 and TEXCOORD_1. The morph targets approach should also be rethought.
ModelFS.glsl
Also needs to be replaced, but the logic is pretty good.
ModelShading.glsl
Gathers PBR textures and uniforms and calls czm_pbrMetallicRoughness or czm_pbrSpecularGlossiness. It can be called from the vertex shader or fragment shader. Good for reference.
@sanjeetsuhag Put together a local Sandcastle to see what a very basic CustomShader.fromString() example (just set output.color to red) looks like.
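For reference, the shader body for that example can be as minimal as the sketch below, assuming the user's string writes to the Output struct described earlier:
// Hypothetical minimal custom shader body: color every fragment red
output.color = vec4(1.0, 0.0, 0.0, 1.0);
output.show = true;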
EDIT: there's a Check.typeOf.object() in CustomShader that should be typeOf.string. I just pushed a commit to model-loading to fix this.
One thing I noticed is that when I use input.position or other inputs in the shader (without a proper primitive), it doesn't throw an error, but the page starts to hang, so we'll need to avoid that in the final design.
There's plenty of other design questions I have from this, but we'll discuss this tomorrow on a call.
Custom Shaders API Mockups
I started thinking about ideas for the public interface to custom shaders. I'll provide several options for discussion.
Part A: Shader Definition
Option A1: Callback function
This first option is to have the user define callback functions for the vertex shader and the fragment shader. This is very similar to the approach @lilleyse started. The input to each will be a big, automatically-generated struct. The goal here is to abstract away the internal details of the renderer, which can get a bit hairy (especially once we get into GPU styling of metadata).
This first version even uses automatically generated structs for varyings, which would have to be declared when constructing the shader:
/**
* // Struct definitions:
*
* // Automatically generated from primitive's attributes
* struct Attribute {
* vec3 position;
* vec3 normal;
* vec2 textureCoord0;
* // ...
* }
*
* // Automatically generated from uniform map
* struct Uniform {
* float time;
* }
*
* // Automatically generated from 3D Tiles batch table,
* // 3DTILES_metadata or EXT_feature_metadata.
* // If a property is used in the shader body but not supported
* struct Property {
* float intensity;
* //...
* }
*
* struct VertexInput {
* Attribute attribute;
* Uniform uniform;
* Property property;
* }
*
* // Automatically generated from varying map
* struct Varying {
* vec2 uv;
* vec3 normal;
* vec3 color;
* vec3 secondaryColor;
* }
*
* struct VertexOutput {
* vec4 position; // gl_Position
* float pointSize; // gl_PointSize
* Varying varying;
* }
*/
// ShaderToy-esque style function abstracts away internal rendering details
// Note: CesiumJS still uses ES5 internally, but in these usage examples I'm
// using ES6 syntax for brevity.
const vertexShader = `
float wave(float time) {
return 0.5 * sin(2.0 * czm_pi * 0.001 * time);
}
void vertexMain(in VertexInput input, out VertexOutput output)
{
vec3 position = input.attribute.position;
position.z += wave(input.uniform.time);
// czm_ built-ins are available
output.position = czm_modelViewProjection * vec4(position, 1.0);
output.varying.uv = input.attribute.textureCoord0;
output.varying.normal = input.attribute.normal;
output.varying.color = input.attribute.color;
output.varying.secondaryColor = input.attribute.secondaryColor;
}
`;
/**
* struct Uniform; // same as in vertex shader
* struct Property; // Same as in vertex shader
* struct Varying; // same as in vertex shader
*
* struct FragmentInput {
* Varying varying;
* Uniform uniform;
* Property property;
* }
*
* struct FragmentOutput {
* vec4 color;
* bool show;
* }
*/
const fragmentShader = `
void fragmentMain(in FragmentInput input, out FragmentOutput output)
{
vec3 color1 = input.varying.color;
vec3 color2 = input.varying.secondaryColor;
vec3 plusZ = vec3(0.0, 0.0, 1.0);
vec3 color = mix(color1, color2, dot(input.varying.normal, plusZ));
output.color = vec4(color, 1.0);
output.show = input.property.intensity > 0.6;
}
`;
The corresponding setup code looks like this:
const startTime = performance.now();
// THREE.js-style uniforms. Include the type so we don't have to
// infer this
const uniforms = {
time: {
value: startTime,
type: UniformType.FLOAT
}
}
// TODO: Should we declare varyings or just require the user to do so?
// varyings don't need a value, but still are declared
// the caller is responsible for setting these in the vertex shader and
// reading them in the fragment shader
const varyings = {
uv: VaryingType.VEC2,
normal: VaryingType.VEC3,
color: VaryingType.VEC3,
secondaryColor: VaryingType.VEC3
}
const shader = new CustomShader({
uniforms: uniforms,
varyings: varyings,
vertexShader: vertexShader,
fragmentShader: fragmentShader
});
Option A2: Let the user define the varyings
This is mostly the same as option A1, but now the user defines the varyings themselves. This is what most custom shader APIs do. It also means that no varyings need to be declared in JS, which is a nice benefit.
/**
* // Note the lack of Varying
* struct VertexOutput {
* vec4 position; // gl_Position
* float pointSize; // gl_PointSize -- used with gl.POINTS only
* }
*/
// ShaderToy-esque style function abstracts away internal rendering details
const vertexShader = `
// user is responsible for defining varyings and making sure they match
// from vertex to fragment shader
varying vec2 v_uv;
varying vec3 v_normal;
varying vec3 v_color;
varying vec3 v_secondaryColor;
float wave(float time) {
return 0.5 * sin(2.0 * czm_pi * 0.001 * time);
}
void vertexMain(in VertexInput input, out VertexOutput output)
{
vec3 position = input.attribute.position;
position.z += wave(input.uniform.time);
// czm_ built-ins are available
output.position = czm_modelViewProjection * vec4(position, 1.0);
v_uv = input.attribute.textureCoord0;
v_normal = input.attribute.normal;
v_color = input.attribute.color;
v_secondaryColor = input.attribute.secondaryColor;
}
`;
/**
* // Note the lack of Varying
* struct FragmentInput {
* Uniform uniform;
* Property property;
* }
*/
const fragmentShader = `
varying vec2 v_uv;
varying vec3 v_normal;
varying vec3 v_color;
varying vec3 v_secondaryColor;
void fragmentMain(in FragmentInput input, out FragmentOutput output)
{
vec3 color1 = v_color;
vec3 color2 = v_secondaryColor;
vec3 plusZ = vec3(0.0, 0.0, 1.0);
vec3 color = mix(color1, color2, dot(v_normal, plusZ));
output.color = vec4(color, 1.0);
output.show = input.property.intensity > 0.6;
}
`;
const startTime = performance.now();
// THREE.js-style uniforms. Include the type so we don't have to
// infer this
const uniforms = {
time: {
value: startTime,
type: UniformType.FLOAT
}
}
// Note the lack of varyings this time.
const shader = new CustomShader({
uniforms: uniforms,
vertexShader: vertexShader,
fragmentShader: fragmentShader
});
Option A3: Declare uniforms with a method
One option we briefly considered is to have methods to declare the types of uniforms before attaching the shader to the Model. However, we don't think this is good because it's too easy to call the methods in the wrong order. Passing things into the constructor would be better.
// this time create the shader first
const shader = new CustomShader({
vertexShader: vertexShader,
fragmentShader: fragmentShader
});
// declare uniforms before attaching to a primitive
const startTime = performance.now();
shader.declareUniform(UniformType.FLOAT, "time", startTime);
// now we can pass the shader to a Model or Tileset
Option A4: Raw Shader
Instead of the above callback method, we could have the user define a whole shader: attributes, uniforms, and all. Most libraries do this, and it gives the user maximal control. However, there are some big caveats:
- We'd need extensive documentation about what attributes/uniforms are available
- The user would have to know about all the metadata attributes/uniforms which will be dynamically generated
- To combine the custom shader with other pre/post-processing steps, we'd have to fall back to the old method of regex-replacing main() -> xxxxMain() and inserting a new main function to wrap it (sketched below). It works, but it's not very elegant.
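To make that caveat concrete, the wrapping technique might look like the sketch below. This is an illustrative helper, not actual CesiumJS code:
// Rename the user's main() so other stages can run around it
function wrapMain(shaderSource, wrappedName) {
    const renamed = shaderSource.replace(/\bvoid\s+main\s*\(/, "void " + wrappedName + "(");
    return renamed +
        "\nvoid main()\n" +
        "{\n" +
        "    // ... pre-processing stages would go here\n" +
        "    " + wrappedName + "();\n" +
        "    // ... post-processing stages would go here\n" +
        "}\n";
}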
Part B: Attaching the Shader to a Model/Tileset
Option B1: Pass it in via the constructor
This is the simplest option: pass the shader in once, at construction time.
// example construction via entities
viewer.entities.add({
model: {
customShader: shader
//...
}
//...
});
// creating a Model directly
const model = new Model({
//...
customShader: shader
});
// Constructing a tileset. This shader will be propagated
// from Cesium3DTileset -> Cesium3DTile -> Cesium3DTileContent -> Model
const tileset = new Cesium3DTileset({
//...
customShader: shader
});
Option B2: Have a setter
Another method is to not bog down the constructor with more options (models and tilesets already have a lot) and to set the custom shader afterwards. This also implies that the custom shader should be hot-swappable. (Though I think existing styles already work like this?)
const entity = viewer.entities.add({
//...
});
entity.customShader = customShader;
const model = new Model(/*...*/);
model.customShader = shader;
const tileset = new Cesium3DTileset(/* ... */);
tileset.customShader = shader;
Option B3: B1 + B2
We could also do both.
Part C: Updating values at runtime
Option C1: functions to set uniforms
This is a pretty straightforward option: have methods on the shader to update uniforms on the fly. This is similar to how p5.js does it. Simple and gets the job done.
function update() {
// p5.js-style update functions. Uniforms must match one declared
// in the constructor
shader.setUniform('time', performance.now() - startTime);
}
These methods would only work for setting variables declared at shader creation time.
Option C2: ES6 would enable other options
Moot point for now since CesiumJS still doesn't support ES6 features, but
if we did have things like Proxy, we could make the updates more natural
(albeit perhaps too magical)
function update() {
shader.uniforms.time = performance.now() - startTime;
}
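For discussion, a Proxy-backed uniforms object might be wired up like the sketch below; this is purely illustrative and assumes ES6:
// Validate assignments against the declared uniforms and
// hand the new value off to the renderer
const uniformValues = { time: 0.0 };
shader.uniforms = new Proxy(uniformValues, {
    set: function (target, name, value) {
        if (!(name in target)) {
            throw new DeveloperError("Uniform " + String(name) + " was not declared");
        }
        target[name] = value;
        // a real implementation would also mark the uniform dirty here
        return true;
    }
});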
Other Notes
About Attributes
One thing we considered was whether to allow setting additional attributes at runtime, beyond those in the glTF itself. However, we want to avoid this for a couple of reasons:
- It's not well-defined. A tileset is made up of multiple tiles, and glTF models may have more than one primitive. If there's only one shader, you'd need to set many attribute arrays at once. It's unclear how to do this.
- If we want to update existing values, this means keeping typed arrays around on the CPU even after uploading to the GPU. This could negatively impact performance.
Another detail: attributes in a glTF use SCREAMING_SNAKE_CASE, which can be cumbersome to look at. We might want to provide rules for automatically converting variable names to camelCase equivalents (see the sketch below), or provide a method for aliasing attributes.
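One possible conversion rule, as a sketch (the exact rule would need discussion, e.g. how to handle digits):
// SCREAMING_SNAKE_CASE -> camelCase, e.g. "FEATURE_ID_0" -> "featureId0"
function toCamelCase(attributeName) {
    return attributeName
        .toLowerCase()
        .replace(/_([a-z0-9])/g, function (match, c) {
            return c.toUpperCase();
        });
}
// toCamelCase("TEXCOORD_0") === "texcoord0"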
How does this interact with materials?
Another thing to consider is how this will interact with materials. There are a couple of scenarios:
- The custom shader replaces the material for full custom behavior. This is good for user-specific use cases.
- The custom shader uses the computed material color as an input, so it becomes a post-processing stage. This might be good for, say, tinting a texture.
- The custom shader computes the material base color as an output, becoming a pre-processing stage. This would be good for procedural texturing.
We might want to make this configurable (see the sketch below). We don't want to go to the complexity of a full node graph, but we could certainly select between these three modes.
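As a sketch, the three scenarios above could be expressed as a simple option; the enum name and values here are illustrative only:
// Where the custom shader runs relative to the material stage
const CustomShaderMode = {
    REPLACE_MATERIAL: "REPLACE_MATERIAL", // ignore the material entirely
    MODIFY_MATERIAL: "MODIFY_MATERIAL",   // material color is an input (post-process)
    BEFORE_MATERIAL: "BEFORE_MATERIAL"    // shader outputs the base color (pre-process)
};

const shader = new CustomShader({
    mode: CustomShaderMode.MODIFY_MATERIAL,
    // ...
});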
CC @lilleyse, @sanjeetsuhag
This is heading in a great direction. Support for varyings was a key part missing from my original proposal and I'm seeing the benefits of it.
I prefer option A1 over A2. I feel that varyings should be abstracted away since WebGL 1 and 2 have different syntax for them. But that's not the only reason; I just think it goes outside the custom shader sandbox.
Are all varyings user defined? I figured the custom shader would be able to call a glTF PBR material function that handles the plumbing for attributes, textures, etc used for PBR. Of course the user can write their own PBR code and ignore our implementation if they want, but the one liner would be super convenient, and it's only convenient if the plumbing happens in the background.
We should also think of ways to simplify the blending process and make it a little less fixed function. Maybe a PBR struct is autopopulated outside the custom shader and the custom shader can modify it before passing it along to the PBR function. (I just read your final section, configurable is better and I think it can be done relatively simply)
I assume input.attribute.position is object space, but is it pre or post morph targets / skinning? I think post...
Should gl_Position or discard in custom shaders be allowed?
Should VertexOutput have a show property too?
Do you think glTF 1.0 can be decomposed to this system? It'll probably end up looking more like Option A4 but I can hope.
:+1: for Option B3. We definitely want hot-swapping. The constructor option is nice too.
@lilleyse Yeah originally I was leaning towards A2, but after our discussion on Friday, I do think having automatically-generated varyings would be good.
> Are all varyings user-defined?
No, this would just be for the varyings the user wants to define. There would likely be built-in ones.
In regards to the PBR handling, based on discussions on Friday and yesterday, I'm thinking that custom shaders (at least the frag shader) should both take a Material as input and output a Material. This way, the custom shader can be moved around the pipeline depending on the configuration settings, without having to change the shader code itself.
void fragmentMain(in Input input, in Material inMaterial, out Material outMaterial) {
outMaterial.baseColor = mix(inMaterial.baseColor, input.uniform.tintColor, 0.5);
outMaterial.normal = perturbNormals(inMaterial.normal);
// etc.
}
This is inspired by Unreal Engine's node editor: defining a material involves connecting nodes to a big struct with baseColor, metallic, roughness, specular, etc. However, you can change the resulting behavior by selecting the lighting model. See Unreal's Shading Models documentation page for more information.
As far as lighting goes, I think we should have a built-in lighting stage that comes after all the material/styling/custom-shader processing. It would be configurable to use any of the following lighting methods (a GLSL sketch follows the list):
- PBR for glTF 2.0 materials
- BLINN, PHONG, LAMBERT for KHR_materials_common support (a glTF 1.0 extension)
- UNLIT, which would just render material.baseColor directly. This satisfies KHR_materials_common's CONSTANT lighting model, and also allows custom shaders to bypass the lighting model if they want to do something custom (e.g. non-photorealistic rendering).
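A minimal sketch of that lighting stage, assuming a compile-time define selects the model (the define name and the shade() helper are illustrative):
vec4 lightingStage(Material material)
{
#ifdef LIGHTING_UNLIT
    // Bypass lighting entirely (CONSTANT lighting, NPR, etc.)
    return material.baseColor;
#else
    // The PBR / BLINN / PHONG / LAMBERT branches would dispatch to the
    // corresponding shading function here
    return shade(material);
#endif
}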
As far as gl_Position/discard goes, we could either check for them and disallow them, or we could just leave them for the user to use at their own risk. gl_Position would most likely get overwritten anyway; discard is another story.
I still need to think about how to handle glTF 1.0/KHR_techniques_webgl. While internally it may use the same Material struct as output, I don't necessarily think it should be forced into a custom shader function.
Yesterday, I also investigated what other engines do as far as custom shaders for comparison. I explored a few:
- Three.js adds some boilerplate code for helper functions, attributes, uniforms, etc. at the top, but then inserts your code verbatim.
- p5.js's WebGL mode lets you write the entire shader, but then you need to make sure you declare attributes correctly based on what the engine passes in. This is not very well documented.
- Babylon.js has a whole node material editor that generates code for you. It creates just one big main function where each node writes to a generated variable outputNN.

My thoughts on the above:
- For our use case, a raw user-defined shader is not a good approach. Our renderer has a lot of dynamically-generated data (especially where metadata is concerned!). Furthermore, the renderer is part of the private API, so we don't want to expose too much to the user.
- I also see that in these cases the code is concise: no wrapping the main() function over and over again. We should think about that as we continue to design Model.js.
A couple of caveats @sanjeetsuhag and I realized:
- You can't store a sampler2D in a struct, so you couldn't do input.uniform.texture.
- When a user declares a uniform, this corresponds to a uniform <type> <identifier>; statement in the shader, so there's not much benefit to putting them into a Uniform struct abstraction.
At least for user-defined uniforms (not sure about internal uniforms), I'm leaning towards keeping them top-level instead of adding them to the Uniform struct. This is both simpler to implement and simpler to use, as textures and other values would be treated the same way.
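Concretely, a user-declared uniform would then translate to an ordinary top-level declaration in the generated GLSL, something like:
// Generated at the top of the shader, outside any struct (illustrative)
uniform float u_time;
uniform sampler2D u_colorRamp;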
@lilleyse what do you think? What other uniforms would go in Uniform besides ones from CustomShader?
> You can't store a sampler2D in a struct, so you couldn't do input.uniform.texture.

Is that true? https://stackoverflow.com/a/54959746 shows an example with a sampler2D in a struct.
> When a user declares a uniform, this corresponds to a uniform <type> <identifier>; statement in the shader, so there's not much benefit to putting them into a Uniform struct abstraction.

I think the abstraction is useful for consistency with attributes and metadata.
> @lilleyse what do you think? What other uniforms would go in Uniform besides ones from CustomShader?

I think it would just be the uniforms set by the user.
Though there might be a need for built-in uniforms like model matrix or light direction/color. Some of those are accessible as czm_ properties. I wonder what other engines do here.
Some notes from talking with @lilleyse this morning:
- The custom vertex shader should continue to be from model space -> model space. A few reasons why:
  - World space results in precision issues.
  - Also, if you start moving vertices around drastically in world space, this would require updating bounding volumes significantly. In some cases this could cause performance problems, because it could break the assumptions of a tileset's bounding volume hierarchy (in that parent bounding volumes must completely contain their children).
- Reference frames we want to make available to the fragment shader (at least to start):
  - Model space position
  - Cartesian position (caveat: 32-bit float precision)
  - Cartographic position (caveat: 32-bit float precision)
  - View space position
- Implicit tile coordinates would be stored separately in a 3D Tiles-related struct
- Heterogeneous data: if a primitive is missing an attribute needed in a shader, log a warning and use a default value instead (see the sketch below)
- Alpha handling: CustomShader should have an isTranslucent flag that enables the translucent pass. Or perhaps select between translucent and mask
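A sketch of that missing-attribute fallback, assuming the shader builder emits a define per available attribute (the define and the default value are illustrative):
#ifdef HAS_NORMAL
    vec3 normal = a_normal;
#else
    // The CPU side would log a warning once, then the shader
    // substitutes a sensible default
    vec3 normal = vec3(0.0, 0.0, 1.0);
#endif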
@ptrgags @lilleyse
> Also, if you start moving vertices around drastically in world space, this would require updating bounding volumes significantly. In some cases this could cause performance problems, because it could break the assumptions of a tileset's bounding volume hierarchy (in that parent bounding volumes must completely contain their children).
I tried to update the vertex position in CustomShader (raising the vertex coordinates upwards), but it seems that the bounding volumes of the model are not updated. Another issue is that some areas of the model get culled by the camera. Is there any way to update the bounding volumes? Even an inaccurate update would be acceptable, as long as the model isn't culled by the camera.
@syzdev I believe a use case like this is beyond the scope of a custom vertex shader. Are you looking to exaggerate tileset height only? In that case, https://github.com/CesiumGS/cesium/issues/8809 is under development now, and would update the bounding volumes.
@ptrgags or @lilleyse Is there anything immediately actionable in this issue? Otherwise I think this should be closed.
@ggetz I agree, this is an old issue. Anything that remains for custom shaders has a more specific issue at this point. Closing.
@ggetz
I agree that updating the bounding volumes is indeed not something CustomShader should be concerned with. But in some special use cases, the vertices may not move in a fixed direction - for example, in Custom Shaders Models - Expand Model via Mouse Drag, the model expands along the normal direction.
Although this is not related to CustomShader, we still have to face the issue. Cesium does not seem to expose a method to modify the bounding volumes. Would forcibly modifying the bounding volume parameters in the source code work? Of course, this would only be a temporary workaround.