Ambient Occlusion doesn't work!
Sandcastle example: Ambient Occlusion
Browser: Chrome
Operating System: macOS
When I enable ambient occlusion, the whole screen turns black and I cannot see anything.
@happyfeet1996 I'm unable to reproduce on MacOS in Chrome. Can you provide some more details about your system by visiting this page and taking a screenshot?
It works today. Maybe my network had problems that day.
Reopening this issue since it does seem to occur on certain hardware. On my MacBook Pro (16-inch, 2019) running macOS 12.2.1, I see the following results:
| Browser | Sandcastle | WebGL Report |
|---|---|---|
| Chrome 100.0.4896.60 | ![]() | ![]() |
| Firefox 98.0.2 | ![]() | ![]() |
| Safari Version 15.3 (17612.4.9.1.8) | ![]() | ![]() |

I can also reproduce the same issue in Chrome on macOS with an eGPU. Specifying the fragment alpha in AmbientOcclusionModulate seems to resolve it:

```diff
- out_FragColor.rgb = ambientOcclusionOnly ? ao : ao * color;
+ out_FragColor = vec4(ambientOcclusionOnly ? ao : ao * color, 1.0);
```
Thanks for the info @shotamatsuda.
@jjhembd Could you please verify the potential fix in the comment above, and recommend a PR if appropriate?
Hi @shotamatsuda, thanks for the tip. However, on my machine (Chrome on Windows), I get the same image even if I set the alpha to 0.0:

```glsl
out_FragColor.rgb = ambientOcclusionOnly ? ao : ao * color;
out_FragColor.a = 0.0;
```

Does zero alpha also work on your machine, or is the 1.0 value important?
@jjhembd On my specific hardware, setting the alpha to zero results in a black screen, as does leaving the value unset.
The importance of 1.0 depends on the blending function of the framebuffer. I haven't fully traced where blending is applied for the framebuffers used in the post-processing stage, but setting `true` as the default value for `PassState.blendingEnabled` also resolves the issue.
https://github.com/CesiumGS/cesium/blob/1.104/packages/engine/Source/Renderer/PassState.js#L36
This suggests to me that the blending function might not be defined, leading to a hardware or driver-dependent issue. Specifying the alpha value explicitly compensates for it.
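To illustrate why the explicit alpha matters, here is a minimal sketch (plain JavaScript, not Cesium code) of standard alpha blending (`SRC_ALPHA`, `ONE_MINUS_SRC_ALPHA`), assuming that is the blend function in effect:

```javascript
// Sketch of standard alpha blending (SRC_ALPHA, ONE_MINUS_SRC_ALPHA).
// blend() composites a source RGBA color over a destination RGB color.
function blend(src, dst) {
  return src.slice(0, 3).map((c, i) => c * src[3] + dst[i] * (1.0 - src[3]));
}

const dst = [0.0, 0.0, 0.0];     // black framebuffer contents
const aoColor = [0.5, 0.5, 0.5]; // AO-modulated scene color

// With alpha explicitly 1.0, the AO result replaces the destination:
console.log(blend([...aoColor, 1.0], dst)); // [0.5, 0.5, 0.5]

// With alpha 0.0 (or undefined garbage that happens to be 0), the output
// collapses to the destination color, i.e. a black screen:
console.log(blend([...aoColor, 0.0], dst)); // [0, 0, 0]
```

If blending is enabled but the fragment alpha is left undefined, the composited result is driver-dependent, which would explain why only some hardware shows a black screen.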
For the record, my setup is macOS 13.3.1, Chrome 112.0.5615.49, and AMD Radeon RX 6800 XT GPU. I encountered the issue before Chrome 100 on macOS Monterey with the same eGPU, and it doesn't reproduce in Safari or Firefox. I first noticed the issue around Cesium 1.89.0, but I'm unsure if it is caused by the Chrome version update.
Hi @shotamatsuda, thanks again for all the info.
I think it was a mistake in `AmbientOcclusionModulate.glsl` to leave `out_FragColor.a` undefined. We don't do this in any other shaders.
There are two input textures in AmbientOcclusionModulate:

- `colorTexture` is the rendered scene before the ambient occlusion stage.
- `ambientOcclusionTexture` is the shading factor computed in `AmbientOcclusionGenerate.glsl`.

`ambientOcclusionTexture` has its alpha value set to a constant 1.0. I think it would make sense to preserve the alpha of `colorTexture` (which will usually also be 1.0, but not always). This would simplify AmbientOcclusionModulate to read as follows:
```glsl
vec4 color = texture(colorTexture, v_textureCoordinates);
vec4 ao = texture(ambientOcclusionTexture, v_textureCoordinates);
out_FragColor = ambientOcclusionOnly ? ao : ao * color;
```
The above code gives the same result as before on my setup. Can you please verify if it works on your hardware? If so, we should be ready for a PR.
@jjhembd Yes, I confirmed it works on my hardware, and it should solve the OP's issue. Deriving alpha from `colorTexture` makes sense to me.
Requested on the forum: https://community.cesium.com/t/improve-ambient-occlusion/9639/4
The major problem is the quality of the stochastic noise and the lack of denoising. I made a library that implements Nvidia's HBAO and cross-bilateral filtering for Cesium. It's not a generic library, but it illustrates the concept. https://github.com/takram-design-engineering/plateau-view/tree/main/libs/cesium-hbao Live demo: https://plateau.takram.com
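For readers unfamiliar with the filtering step: a cross-bilateral filter weights each neighboring AO sample by both screen distance and depth similarity, so the blur does not bleed across depth edges. A minimal sketch of the weight function (plain JavaScript; the sigma values are illustrative, not taken from the library above):

```javascript
// Sketch of a cross-bilateral filter weight: a spatial Gaussian multiplied by
// a range term on depth, so the blur does not bleed across depth edges.
// The sigma values below are illustrative.
function bilateralWeight(dPixels, depthDiff, sigmaSpace, sigmaDepth) {
  return Math.exp(-(dPixels * dPixels) / (2 * sigmaSpace * sigmaSpace))
       * Math.exp(-(depthDiff * depthDiff) / (2 * sigmaDepth * sigmaDepth));
}

// A neighbor at the same depth contributes strongly; a neighbor across a
// depth discontinuity contributes almost nothing:
console.log(bilateralWeight(1, 0.0, 4, 0.1)); // ~0.97
console.log(bilateralWeight(1, 1.0, 4, 0.1)); // ~0
```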
HBAO with cross-bilateral filter:
HBAO without cross-bilateral filter:
No HBAO:
Awesome demo @shotamatsuda! Would you have any interest in contributing this AO algorithm implementation?
One issue with our current implementation: it computes a non-zero obscurance on flat surfaces.
This may be related to errors in the depth buffer. We are computing obscurance based on the "horizon angle" relative to the local surface normal, as reconstructed from screen position and the depth buffer. On a flat surface, the horizon angle should be 90° from the normal, and obscurance should be zero. However, the reconstructed normals don't look right. Here is a rendering of the difference between the reconstructed normals at adjacent pixels.
It shows non-zero values at the edges/curves of the model, as expected. But flat surfaces also show significant noise.
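For context, screen-space AO typically reconstructs the normal from the cross product of two tangents derived from neighboring depth samples, so small depth errors perturb the result. A minimal numeric sketch (plain JavaScript, with hypothetical eye-space positions, not the actual shader):

```javascript
// Sketch: reconstructing a surface normal from three eye-space positions, the
// way a shader does with neighboring depth samples. On a flat surface, the
// cross product of the two screen-space tangents gives a constant normal;
// noise in the depth values tilts it. Positions here are hypothetical.
function sub(a, b) { return a.map((v, i) => v - b[i]); }
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}
function normalize(v) {
  const len = Math.hypot(...v);
  return v.map(x => x / len);
}

// Pixel and its right/up neighbors on a flat plane at eye depth z = -10:
const p  = [0.0, 0.0, -10.0];
const pr = [0.1, 0.0, -10.0];
const pu = [0.0, 0.1, -10.0];
console.log(normalize(cross(sub(pr, p), sub(pu, p)))); // [0, 0, 1], facing the camera

// A 1 cm depth error at one neighbor tilts the reconstructed normal by ~5.7 degrees:
const puNoisy = [0.0, 0.1, -10.01];
console.log(normalize(cross(sub(pr, p), sub(puNoisy, p))));
```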
CC https://github.com/CesiumGS/cesium/issues/11067?
The reconstructed normals are better if we don't use a log depth buffer.
Current result, with log depth buffer:
After forcing Scene.defaultLogDepthBuffer = false:
I hope AO will have better effects and performance. It is a good way to improve the effect of Cesium models. At present, it needs to be optimized in terms of performance and effects. Thank you for your efforts.
As discussed offline, the depth buffer for post-processing has a precision problem. To address this, we will try the following (in order):
- Use a higher-precision depth buffer during the initial rendering. We currently use the `DEPTH24_STENCIL8` format, corresponding to the `UNSIGNED_INT_24_8` data type, which is available in both WebGL2 and WebGL1 (with the `WEBGL_depth_texture` extension). In a WebGL2 context, we can try the `DEPTH32F_STENCIL8` format, corresponding to the `FLOAT_32_UNSIGNED_INT_24_8_REV` data type.
- Copy the depth buffer from each render pass into a floating-point texture, instead of converting it to fixed precision and packing it into an RGBA texture.
- Fix the depth buffer for multi-frustum rendering (mainly relevant for linear depth buffers). We currently copy the raw result of each frustum to the same RGBA texture, without accounting for the different near and far planes that the value is referenced to. Once we switch to a floating-point texture, we can project the depth value back to eye coordinates before copying, so that all frustums will be written in the same coordinate system.
- Try the `EXT_clip_control` extension (only available in recent versions of Chrome).
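As a rough sanity check on the first item, here is a back-of-the-envelope estimate (plain JavaScript; the near/far/distance values are illustrative) of the eye-space error per least-significant bit of a 24-bit fixed-point depth buffer, using the derivative of the standard window-depth function d(z) = (f / (f - n)) * (1 - n / z):

```javascript
// Sketch: eye-space depth error per least-significant bit of a 24-bit depth
// buffer, for a standard perspective projection. Differentiating
// d(z) = (f / (f - n)) * (1 - n / z) gives the error amplification
// dz = dd * (f - n) * z^2 / (f * n). All numbers here are illustrative.
function depthErrorPerLsb(near, far, z, bits = 24) {
  const dd = 1 / (2 ** bits - 1); // one quantization step of window depth
  return dd * (far - near) * z * z / (far * near);
}

const far = 10000.0;
const z = 1000.0; // a point 1 km from the camera
console.log(depthErrorPerLsb(0.1, far, z)); // ~0.6 m of depth uncertainty
console.log(depthErrorPerLsb(1.0, far, z)); // ~0.06 m: roughly 10x better
```

This is consistent with the observation elsewhere in this thread that moving the near plane from 0.1 m to 1 m noticeably reduces the noise.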
Here is an updated Sandcastle displaying the gradient of the depth buffer.
Log depth shows strong (flickering) noise in the distance, as well as some spuriously large gradients on smooth surfaces nearer the camera.
Linear depth reduces the noise nearer the camera. However, only the frustum nearest the camera is valid. The back frustum is washed out.
I updated the depth buffer to use `DEPTH32F_STENCIL8` when in a WebGL2 context. Note that this doubles the size of the depth buffer: because of alignment issues, the values are stored as 64 bits per pixel.
Unfortunately, it does not have any effect on the noise.
`DEPTH24_STENCIL8`:
`DEPTH32F_STENCIL8`:
See the depth-precision branch.
@ggetz highlighted this commit, before which AO did not show a noisy background on flat surfaces. Here is a quick screenshot of AO from a prior commit (70fc40e258ce5592d7b00abd0d52cb10e90828c0):
Note the absence of any occlusion on flat surfaces, even with the "Bias" set to zero.
For comparison, here is a screenshot from the current main branch:
And here is the result using the DEPTH32F_STENCIL8 format and a 1m near plane (instead of 0.1 m), both of which reduce the noise in the depth buffer:
While some minor artifacts are reduced by the DEPTH32F_STENCIL8 format and a 1m near plane, the main problem is still there: a background noisiness on flat surfaces. This appears to be a problem introduced in https://github.com/CesiumGS/cesium/pull/9966.
I also captured the gradient of the depth buffer prior to #9966. It does not look significantly different from the depth buffer from the current main branch.
The background noise problem appears to be something more fundamental. Perhaps a depth buffer is being overwritten somewhere?
The background noise is generated when the HBAO stepping is along a direction not aligned with screen X/Y.
Here is the AO output with stepping only along X/Y. This is achieved by overriding the "randomVal" texture and using a constant rotation of 0 at every sample.
Note the significant reduction in the background on flat surfaces. The remaining background is the result of the noise in the depth buffer. It is reduced by setting the near plane to 1m (instead of 0.1m):
This is already approaching the result before #9966. One possible explanation is that before #9966, we had 2 bugs canceling each other:
- The random texture was not being read correctly, leading to a rotation of 0 (or nearly 0) being used at every pixel.
- The AO was already incorrect along non-screen axes, but this bug was hidden by the random texture problem.
Then #9966 fixed the first bug, exposing the second bug. But this is speculation--actually tracing texture handling through the major refactor in #9966 would take significant time.
Here is the result using a constant rotation of czm_piOverFour:
Note the (incorrect) non-zero values on every flat surface. The current noisy result in main is a random mix of various non-zero values at different rotations.
Here is a remarkably similar-looking bug, which appears to be related to incorrect normals. In that case, I think world-space geometry was leaking into the normal calculation: https://www.reddit.com/r/GraphicsProgramming/comments/sd17k0/trying_to_implement_hbao_shader/ The similar result was what made me suspect our normals, and therefore the depth buffer from which they were computed.
There were some incorrect geometry assumptions in our shader. For example:

```glsl
//Reconstruct Normal Without Edge Removation
vec3 getNormalXEdge(vec3 posInCamera, float depthU, float depthD, float depthL, float depthR, vec2 pixelSize)
{
    vec4 posInCameraUp = clipToEye(v_textureCoordinates - vec2(0.0, pixelSize.y), depthU);
    vec4 posInCameraDown = clipToEye(v_textureCoordinates + vec2(0.0, pixelSize.y), depthD);
```
Based on what the code is doing, the comment should actually read something like this:

```glsl
// Reconstruct normal with edge removal
```
But this line:

```glsl
vec4 posInCameraUp = clipToEye(v_textureCoordinates - vec2(0.0, pixelSize.y), depthU);
```

appears to assume `v_textureCoordinates` is oriented with y = 0 at the top of the screen, when it actually starts from the bottom of the screen.
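A trivial illustration of the convention (plain JavaScript; `upNeighbor` is a hypothetical helper, not shader code):

```javascript
// Sketch: WebGL texture coordinates put v = 0 at the bottom of the screen,
// so the on-screen "up" neighbor of a pixel is at v + pixelSize.y, not
// v - pixelSize.y. upNeighbor() is a hypothetical helper for illustration.
function upNeighbor(uv, pixelSizeY) {
  return [uv[0], uv[1] + pixelSizeY]; // toward the top of the screen
}
console.log(upNeighbor([0.5, 0.5], 0.001));
```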
Overall, I think our current prototype AO is pretty rough. The choice of algorithm is good, it just needs a rewrite, with careful checking of the input and output of each step.
We now know the depth buffer input is somewhat problematic, but:
- Setting near plane to 1m reduces the error
- The remaining depth buffer error is relatively small (compared to other problems in the shader)
Some notes from an offline discussion: Our HBAO is generating wrong values along any sampling axis that is not aligned with screen X/Y. This matters because each pixel rotates the sampling axes by a value that is randomized across neighboring pixels.
The error along rotated axes is much larger than the depth buffer error. It is not clear whether:
- HBAO is sensitive to depth buffer error, and magnifies the problems, or
- Our implementation has a bug affecting rotated sampling axes.
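For reference, the per-pixel rotation in question works roughly like this sketch (plain JavaScript; the structure is illustrative, not the exact shader logic):

```javascript
// Sketch: per-pixel rotation of HBAO sampling directions. Each of
// numDirections evenly spaced base directions is rotated by a per-pixel
// random angle, so neighboring pixels sample along different axes.
function sampleDirections(numDirections, randomAngle) {
  const dirs = [];
  for (let i = 0; i < numDirections; i++) {
    const angle = (2 * Math.PI * i) / numDirections + randomAngle;
    dirs.push([Math.cos(angle), Math.sin(angle)]);
  }
  return dirs;
}

// With a rotation of 0, the first direction is the screen +X axis:
console.log(sampleDirections(4, 0)[0]); // [1, 0]
```

With a nonzero random angle, the sampling axes no longer align with screen X/Y, which is exactly the case where our implementation produces wrong values.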
We will spend a little time reworking and testing each step of the AO implementation, under the preliminary assumption that the depth buffer is not the main problem. Initial tests/QCs will include:
- Estimate the actual angular difference between adjacent reconstructed normals. (My earlier QC was arbitrarily scaled). If this is small (< 5 deg.) on flat surfaces, then the normals themselves may not be the problem.
- Compare surface gradient (via native dFdx / dFdy) to slopes computed at sample points along rotated axes. Some other implementations snap to the texture grid, to ensure the sampled depth value and computed X/Y coordinates align at each step.
- Adjust sample distance along rotated axes, to avoid resampling the same points. For a given rotation angle, I can manually adjust step size until the noise goes away. But we would need a different step size for each angle.
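The grid-snapping idea mentioned in the second point can be sketched as follows (plain JavaScript; `snapToTexel` is a hypothetical helper, not Cesium code):

```javascript
// Sketch of snapping a sample coordinate to the nearest texel center, so the
// depth texel actually fetched matches the x/y position used in the slope
// computation. snapToTexel() is a hypothetical helper for illustration.
function snapToTexel(uv, pixelSize) {
  return [
    (Math.floor(uv[0] / pixelSize[0]) + 0.5) * pixelSize[0],
    (Math.floor(uv[1] / pixelSize[1]) + 0.5) * pixelSize[1],
  ];
}

// Two nearby sample points inside the same texel snap to the same center:
console.log(snapToTexel([0.5003, 0.5007], [0.001, 0.001]));
```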
> One possible explanation is that before https://github.com/CesiumGS/cesium/pull/9966, we had 2 bugs canceling each other
I tried to confirm this, but can't. Before that PR, the result looks fine even with a constant rotation of czm_piOverFour:
Our first test was to check the angular difference between reconstructed normals at adjacent pixels.
Here are some images showing the sine of the angle between the normal at a given screen pixel, and the normal at the pixel just above it (Y + 1 pixel). The sine value is scaled by 11.47, which will make the image fully white if the angular difference exceeds 5 degrees.
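For reference, the 11.47 scale factor is just the reciprocal of the sine of 5 degrees:

```javascript
// Where 11.47 comes from: 1 / sin(5 degrees), so a 5-degree angular
// difference between adjacent normals maps to a value of 1.0 (full white).
const scale = 1 / Math.sin(5 * Math.PI / 180);
console.log(scale); // ≈ 11.4737
```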
From best to worst:

- Using linear depth with near plane at 1m. Note how the image is black on flat surfaces, with white outlining the edges of the model. This is as expected.
- Using log depth with near plane at 1m. Some linear noise appears on the far-away terrain, but it represents angular differences of less than 5 degrees. There are also some strange artifacts near the edges of the model. But overall, the result is close enough. The depth errors are small enough that we can move ahead with further debugging of the rest of the AO code.
- Using log depth with near plane at 0.1m (current default setting). The error between normals at adjacent pixels exceeds 5 degrees in many places. This will be a problem for AO.
I think we can move ahead for now and debug the rest of the AO process, using a near plane at 1m. We will need to come back later to revisit the errors with log depth and closer near planes.
Some additional checks just to be sure.
Angular difference between current pixel and the pixel to the right (X + 1 pixel)
Normals computed from tangents along the screen diagonal, difference along x = -y axis:
Normals computed from tangents along the screen diagonal, difference along x = y axis:
Difference between normals computed along screen axes vs. along diagonals