
Research WebGL based surface shapes

eirizarry opened this issue 6 years ago • 12 comments

Child of #561

This effort investigated alternative techniques to draw surface shapes, lines, and images in WebWorldWind. The attached document details the results of the investigation and the comments below summarize some of the performance and other observations of the effort.

Alternative Surface Shapes.pdf

Additionally, research literature is attached in a zip archive.

Shadow Volume Resources.zip

eirizarry avatar Apr 02 '18 21:04 eirizarry

Preliminary Shadow Volume Performance Testing

With help from @pdavidc yesterday, I have identified a reliable shadow volume technique. I've used this technique to implement a test surface shape, SurfaceCircleSV. SurfaceCircleSV uses an identical API to the current SurfaceCircle and, more importantly, the same boundary generation. Instead of rendering to texture, it uses shadow volumes. Using the functional test for surface shapes I made the other week, I am able to get an early comparison of how shadow volume rendering compares with render to texture.

I ran two tests: the "static shape" case (1000 non-moving surface circles) and the "dynamic shape" case (500 circles moved every frame).

On my machine, I see the following average and standard deviation of the frame time for an automated navigation:

Stats updated on 11 June 2018 with data from interior and outline drawing


iMac 2.93 GHz Intel Core i7 - ATI Radeon HD 5750

Static Case

| Test Case and Commit Hash | Stencil | Render to Texture |
| --- | --- | --- |
| Average - Interior Only - d431f15 | 3.8ms | 11-13ms |
| Average - Interior and Outline - 268dc4f | 7-9ms | 15-20ms |

Dynamic Case

| Test Case and Commit Hash | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average - Interior Only - d431f15 | 28.7ms | 163-169ms |
| Average - Interior and Outline - 268dc4f | 33-37ms | 169-172ms |


Nexus 9 - Android 7.1.1 - Chrome

Static Case

| Test Case and Commit Hash | Stencil | Render to Texture |
| --- | --- | --- |
| Average - Interior Only - d431f15 | 15-25ms | 38-43ms |
| Average - Interior and Outline - 268dc4f | 19-30ms | 50-57ms |

Dynamic Case

| Test Case and Commit Hash | Stencil | Render to Texture |
| --- | --- | --- |
| Average - Interior Only - d431f15 | 80-83ms | crashed |
| Average - Interior and Outline - 268dc4f | 88-93ms | crashed |


Nexus 5x - Android 8.1.0 - Chrome

Static Case

| Test Case and Commit Hash | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average - Interior Only - d431f15 | 16-19ms | 31-33ms |
| Average - Interior and Outline - 268dc4f | 56-61ms | 42-90ms |

Dynamic Case

| Test Case and Commit Hash | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average - Interior Only - d431f15 | 88-122ms | crashed |
| Average - Interior and Outline - 268dc4f | 93-218ms^ | crashed |

^ The Nexus 5x demonstrated progressively worse frame times after reloading the page. The fifth run of the test crashed the browser.


So there seems to be a significant frame time advantage for the shadow volume rendering pipeline over the current render-to-texture pipeline with state tracking when shape properties are changing.

Some Notes:

  • The SurfaceCircleSV (shadow volume technique) does not yet support an outline
  • The shadow volume technique is using an RTC coordinate space to avoid jitter

Future Work:

  • While frame times are looking good, I'll be evaluating memory usage as well. The shadow volume technique uses GPU memory far more extensively than our current render to texture approach
  • The implementation of lines will likely add significant memory use and additional draw calls. I’m still working on a solid shadow volume line implementation

If you have any questions, let me know.

zglueck avatar May 30 '18 20:05 zglueck

@zglueck These numbers are very encouraging. This implementation of shadow volumes has two issues that might be hampering these baseline performance numbers:

  1. Every instance of SurfaceCircleSV creates its own WebGL program. That compilation and linking time is included in these results. I've pushed a change to the test branch that corrects this.
  2. Each surface circle volume requires six draw calls. This could have a big impact on lower performing systems. Why not compile all primitive elements into a single triangle strip?
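The first issue amounts to memoizing the compiled program so compilation and linking happen once, shared by all instances. A minimal sketch of such a cache (helper names are assumed here, not WorldWind's actual resource cache API):

```javascript
// Sketch of a shared-program cache (hypothetical helper). The factory
// compiles and links exactly once per key; every SurfaceCircleSV instance
// then reuses the same program instead of building its own.
function getProgram(cache, key, create) {
    let program = cache.get(key);
    if (!program) {
        program = create(); // compile + link happens only on the first request
        cache.set(key, program);
    }
    return program;
}
```

Each shape would then look up the shared program in its render method rather than compiling its own copy, removing that one-time cost from the frame time measurements.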

Additionally, we need to know what machine these performance numbers are reported on, and we need to see numbers from both a desktop and a mobile device.

pdavidc avatar May 31 '18 00:05 pdavidc

@pdavidc thanks for the notes. I completely missed the unique program for each shape, thanks for the fix. I took some time to change the three draw calls down to one. I'm updating the performance table above with the new numbers:


iMac 2.93 GHz Intel Core i7 - ATI Radeon HD 5750 - d431f15

Static Case

| | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average | 3.8ms | 11-13ms |
| Standard Deviation | 2.6ms | 12-14ms |

Dynamic Case

| | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average | 28.7ms | 163-169ms |
| Standard Deviation | 10-12ms | 26-30ms |

The two changes did wonders for the standard deviation for both cases. The static average improved tremendously.

I'll expand testing to mobile devices before continuing on the outline/line shadow volume.

zglueck avatar May 31 '18 04:05 zglueck

Thanks, @zglueck. The latest performance metrics are very encouraging. Let's hope a sample mobile device produces similar results.

pdavidc avatar May 31 '18 16:05 pdavidc

I tested commit d431f15 on two Android mobile devices. I conducted each test five times; the spans of the results are documented below:


Nexus 9 - Android 7.1.1 - Chrome - d431f15

Static Case

| | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average | 15-25ms | 38-43ms |
| Standard Deviation | 11-21ms | 37-43ms |

Dynamic Case

| | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average | 80-83ms | crashed |
| Standard Deviation | 30-36ms | crashed |


Nexus 5x - Android 8.1.0 - Chrome - d431f15

Static Case

| | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average | 16-19ms | 31-33ms |
| Standard Deviation | 12-20ms | 36-37ms |

Dynamic Case

| | Shadow Volume | Render to Texture |
| --- | --- | --- |
| Average | 88-122ms | crashed |
| Standard Deviation | 35-47ms | crashed |

zglueck avatar Jun 01 '18 17:06 zglueck

Commit d431f15 only includes interior coloring of surface circles drawn by the shadow volume method. The render to texture technique provides an outline capability: the outline may have a user-selected color/transparency and a width in pixels.

While an outline is a simple concept, correctly implementing the effect with the shadow volume technique requires a narrow volume, turning a single vertex point defining a location on a line into four vertices and requiring spatial data to draw the volume. See the image below demonstrating the outline of the shadow volume (red lines) and the resultant surface line: screen shot 2018-06-01 at 1 36 45 pm

Generating the volume to represent the line isn't a difficult task, but as the line increases in complexity or includes many vertices (like the smooth outline of a surface circle), the computational load can start to affect frame time if calculated on the CPU. Initial shadow volume performance with dynamic shapes (shapes whose positions or sizes change) shows a tremendous advantage over the render to texture method. Diluting that advantage with significant geometry regeneration can be avoided by using the vertex shader to create the volume.

Many shadow volume implementations use a geometry shader to generate the vertices for the shadow volume from a simple position vertex buffer, but geometry shaders aren't available in WebGL 1. So we've opted for a "poor man's geometry shader": duplicating the vertices in the vertex buffer, then transforming them in the vertex shader. Further, we reference the previous and next vertex positions to generate a normal vector, providing a direction in which to transform the vertices. By reusing the vertices stored in the vertex buffer for drawing the interior of the shape, we only have to use the CPU to generate the cartesian position from the geographic location twice per vertex (top and bottom of the volume).
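The duplication step could be sketched as follows (a minimal sketch with assumed names and layout, not the actual WorldWind buffer format): each boundary location becomes two vertices, a top and a bottom of the volume, tagged with a flag the vertex shader reads when offsetting along the normal derived from the neighboring positions.

```javascript
// Duplicate each cartesian boundary point into a top and a bottom volume
// vertex. The fourth component is a direction flag (+1 top, -1 bottom) that
// the vertex shader uses when extruding along the computed normal.
// (Sketch only; layout and helper name are assumptions.)
function buildVolumeVertices(positions) {
    // positions: flat [x, y, z, x, y, z, ...] array of boundary points
    const out = [];
    for (let i = 0; i < positions.length; i += 3) {
        out.push(positions[i], positions[i + 1], positions[i + 2], 1.0);  // top
        out.push(positions[i], positions[i + 1], positions[i + 2], -1.0); // bottom
    }
    return new Float32Array(out);
}
```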

Here is what the shadow volume for the interior draw looks like: screen shot 2018-06-01 at 2 49 37 pm

And once the stencil has been enabled and the z-fail test applied: screen shot 2018-06-01 at 2 52 45 pm

The z-fail test basically says: draw the shadow volume geometry and, for each pixel, imagine shooting a ray through it, counting +1 every time the ray passes through a front face and -1 every time it passes through a back face; any pixel with a non-zero count is in the shadow (or on the terrain, in our case).
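As an illustration of the counting rule only (in practice the GPU stencil buffer does this counting, not JavaScript):

```javascript
// Model of the z-fail counting rule: +1 for each front face a ray through the
// pixel crosses, -1 for each back face; a non-zero sum means the pixel lies
// inside the shadow volume (on the terrain, in our case).
function insideShadowVolume(crossings) {
    return crossings.reduce((sum, c) => sum + c, 0) !== 0;
}
```

A ray that passes cleanly through the volume (one front face, one back face) sums to zero and is untouched; a ray that terminates on terrain inside the volume leaves an unbalanced count.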

z-fail has the advantage that it still works when the near clip plane intersects the shadow volume. Further, @pdavidc figured out a clever trick to prevent z-fail's typical shortcoming of far plane clipping, which would allow the shape to show through the earth. By clamping the z value to the w value in clip space, we prevent the far plane from clipping, e.g.:

```glsl
gl_Position.z = min(gl_Position.z, gl_Position.w);
```

For the outline, we use the vertex shader to move the interior volume's vertices to form a narrow wall around the outline: screen shot 2018-06-01 at 3 06 41 pm

Once the stencil is enabled, we get: screen shot 2018-06-01 at 3 05 01 pm

Checkout ee07629 for a full working version of the shadow volume surface circle implementation.

Some Notes:

Pros:

  • Terrain is perfectly painted by the shape and there are no level of detail transitions
  • The interior and outline use the same shaders and buffers!
  • It's fast (see the initial benchmarks above) and simple (compare with the state tracking of the current surface shapes)
  • Antimeridian, poles? Who cares! By using a cartesian system, geographic discontinuities are a non-issue

Cons:

  • Use of the cartesian system at large scales requires a Relative to Center (RTC) or Relative to Eye (RTE) coordinate system to avoid jitter. The RTC coordinate system will exhibit jitter with larger shapes (>130km???).
  • The outline is currently geographically defined, e.g., it's 1000 meters wide, whereas our current surface shapes define outline width in pixels

Future Work:

  • Define the outline in screen space instead of geographically
  • RTE memory evaluation (RTE uses double precision in the shader using DSFUN90 algorithms, which means two floats for every vertex, and remember, a single Location on the surface turns into four vertices when doing an outline)
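For context on that memory cost, the DSFUN90-style split stores each double as a high float plus a residual low float (sketch only; the shader-side reconstruction is not shown):

```javascript
// Split a double into a high float32 and a low float32 residual, as used by
// DSFUN90-style relative-to-eye rendering. The two floats are uploaded as
// vertex attributes, doubling per-component storage on the GPU.
function splitDouble(value) {
    const high = Math.fround(value); // nearest float32 to the double
    const low = value - high;        // residual, also stored as a float32
    return [high, low];
}
```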

zglueck avatar Jun 01 '18 20:06 zglueck

Fantastic work @zglueck

pdavidc avatar Jun 01 '18 20:06 pdavidc

Updated Performance Numbers with Outlines

I tested commit 268dc4f on an iMac and two Android mobile devices. Commit 268dc4f includes outline colors on some shapes. I conducted each test five times; the spans of the results are documented below:


iMac 2.93 GHz Intel Core i7 - ATI Radeon HD 5750 - 268dc4f

Static Case

| | Stencil | Render to Texture |
| --- | --- | --- |
| Average | 7-9ms | 15-20ms |
| Standard Deviation | 10-12ms | 17-23ms |

Dynamic Case

| | Stencil | Render to Texture |
| --- | --- | --- |
| Average | 33-37ms | 169-172ms |
| Standard Deviation | 14-19ms | 31-33ms |


Nexus 9 - Android 7.1.1 - Chrome - 268dc4f

Static Case

| | Stencil | Render to Texture |
| --- | --- | --- |
| Average | 19-30ms | 50-57ms |
| Standard Deviation | 14-24ms | 46-49ms |

Dynamic Case

| | Stencil | Render to Texture |
| --- | --- | --- |
| Average | 88-93ms | crashed |
| Standard Deviation | 40-45ms | crashed |


Nexus 5x - Android 8.1.0 - Chrome - 268dc4f

Static Case

| | Stencil | Render to Texture |
| --- | --- | --- |
| Average | 56-61ms | 42-90ms |
| Standard Deviation | 103-112ms | 54-121ms |

Dynamic Case

| | Stencil | Render to Texture |
| --- | --- | --- |
| Average | 93-218ms* | crashed |
| Standard Deviation | 48-103ms* | crashed |

* The Nexus 5x demonstrated progressively worse frame times after reloading the page. The fifth run of the test crashed the browser.

Adding outlines to shapes decreased performance for both the Stencil and Render to Texture techniques. In nearly every case, the stencil technique performed better. The Nexus 5x findings with dynamic shapes and the stencil could use further evaluation with Chrome development tools.

zglueck avatar Jun 11 '18 13:06 zglueck

Outline Stippling Investigation

I've added an initial outline stippling capability in ec2af7f. The implementation is based on the approach used in WWA: texture coordinates are derived from the distance between points, and a small texture with transparent regions provides the stipple pattern.
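The distance-based texture coordinate assignment could be sketched like this (assumed helper, mirroring the WWA-style approach described above, not the actual ec2af7f code):

```javascript
// Assign a 1D stipple texture coordinate to each outline point from the
// cumulative path length, so the pattern repeats evenly along the outline
// regardless of individual segment lengths. (Illustrative sketch.)
function stippleTexCoords(points, patternLengthMeters) {
    const coords = [0];
    let distance = 0;
    for (let i = 1; i < points.length; i++) {
        const dx = points[i][0] - points[i - 1][0];
        const dy = points[i][1] - points[i - 1][1];
        const dz = points[i][2] - points[i - 1][2];
        distance += Math.sqrt(dx * dx + dy * dy + dz * dz);
        coords.push(distance / patternLengthMeters);
    }
    return coords;
}
```

With the sampler set to repeat, the fractional part of the coordinate selects the visible and transparent segments of the pattern texture.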

screen shot 2018-06-12 at 11 59 11 am

The major source of experimentation at this point has been determining which geometry the stipple textures should be applied to. The screen shot above reverses the winding of the outline stencil (shadow volume) and widens the volume slightly. It appears nicely from some aspects, and terribly from others: screen shot 2018-06-12 at 12 04 48 pm just moving a little more gets you: screen shot 2018-06-12 at 12 04 58 pm

The transition between viewing the side volume textures and the bottom appears to be the main issue. My next effort will focus on applying the textures only to the bottom part of the shadow volume and increasing their width (so the stippling doesn't disappear if the bottom plane does not intersect the surface).

Update: Technique 2 - Texture mapped to bottom plane of shadow volume

This stippling technique is similar to the previous one in two ways: it reuses the distance between points to map texture coordinates, and it uses a simple texture for the stipple pattern. Instead of mapping the texture to the complete outline volume, this technique maps only to the bottom plane, but extends the bottom plane by a multiple of the volume's width, providing overlap for high-tilt views.

Here is the result: screen shot 2018-06-12 at 2 20 26 pm And with some diagnostic debug visualization: screen shot 2018-06-12 at 2 20 39 pm As you can see, the stipple texture is actually much wider than the outline volume in an attempt to provide stipple coverage for all viewing angles. Note that the outline in the distance completely misses stippling due to the bottom texture not intersecting the volume.

zglueck avatar Jun 12 '18 17:06 zglueck

Using Orthographic Projection for Surface Images

@pdavidc suggested a brilliant alternative to rendering images on terrain. Instead of mapping texture coordinates from latitude and longitude to an image, we could use an orthographic projection of the image and terrain for mapping texture coordinates. The most immediate benefit is the elimination of special handling of geographic discontinuities like the antimeridian and the poles due to the change to cartesian coordinates during the transform.

Brief Description of the Technique

Right now, when we draw to the terrain or surface of the earth, we require equirectangular tiles and images, which we then draw on using simply transformed geographic coordinates (e.g. 22.75 degrees east turns into a percentage of the parent tile). The orthographic approach projects the image onto the globe, but then uses the same transformation process to determine the terrain's coordinates in texture space. The additional projection (terrain coordinates to texture coordinates) can occur inside the vertex shader, which has the additional benefit of not requiring a priori determined texture coordinates.
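The per-vertex mapping might look like this in JavaScript form (illustrative sketch with assumed names; in practice the multiplication happens in the vertex shader):

```javascript
// Transform a cartesian terrain point by the image's orthographic
// view-projection matrix (column-major 4x4), then remap clip space [-1, 1]
// to texture space [0, 1]. Orthographic projection means w stays 1, so no
// perspective divide is needed. (Sketch; not the WorldWind API.)
function orthoTexCoord(point, m) {
    const [x, y, z] = point;
    const cx = m[0] * x + m[4] * y + m[8] * z + m[12];
    const cy = m[1] * x + m[5] * y + m[9] * z + m[13];
    return [cx * 0.5 + 0.5, cy * 0.5 + 0.5];
}
```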

Demonstration

I created a simplistic implementation using a hemisphere with rudimentary controls to better understand the necessary process and requirements. You can see it in the test/shadow-volume-performance branch or JSFiddle.

The sample allows you to control your view of the hemisphere and the positioning of the projected image. Note, as the image moves up to and over the pole the image does not distort and no special calculations were required.

The sample uses a similar tessellation approach to WorldWind. The spacing is not the same, but the density of vertices as you approach the poles is similar.

Observations

Initial findings using the simple sample linked above.

Pros:

  • No special handling of poles or antimeridians which greatly simplifies the implementation
  • The image level of detail is independent from the terrain resulting in continuous looking surface images
  • Texture coordinate determination is completed in the vertex shader eliminating a costly cpu side computation and more importantly saves gpu buffer storage

Cons:

  • There is distortion in the texture coordinates due to the curvature of the earth. The further from the center of the image, the more distortion. The larger the image relative to the globe size, the more distortion. I believe there is a potential correction for the distortion (see below).
  • The sample is a hemisphere, without terrain. I believe projecting vertices with terrain may cause unusual distortions which would be non-trivial to correct. However, @pdavidc demonstrated decoupling altitude from terrain coordinates, which would allow projecting the spherical geographic coordinates to determine appropriate texture coordinates before applying terrain (see, he's already like four steps ahead here ;))

Distortion Correction for Texture Coordinates

As noted above, due to the curvature of the sphere, the texture coordinates are distorted as you move away from the center of the image. Using some assumptions about the sphere, we could provide a correction to more accurately map the surface image to the globe. Specifically, we need to correct the orthographic image position for the curvature of the earth. I apologize for the quality of the following diagram; when I get a chance, I'll scan a better version and upload it. screen shot 2018-06-13 at 11 30 09 pm

Basically, the distortion error is z - rho. We can calculate that difference and apply a correction to the texture coordinates to reduce image distortion. If we assume the globe is a sphere during this process, we could even precompute the correction values and embed them in a texture, which would avoid calculating the correction within the shader.
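One possible form of the correction under the spherical assumption (my own formulation, not necessarily the exact z - rho construction in the diagram): an orthographic projection places a point at angular distance theta from the image center at planar radius R·sin(theta), while the true surface (arc) distance is R·theta, so rescaling the planar radius recovers the arc distance.

```javascript
// Hedged sketch of a spherical distortion correction: convert the
// orthographic planar radius back to arc length on a sphere of radius R.
// The correction grows toward the image edges, matching the observed
// distortion behavior. (Assumed formulation, for illustration only.)
function correctedRadius(planarRadius, sphereRadius) {
    const ratio = Math.min(planarRadius / sphereRadius, 1);
    return sphereRadius * Math.asin(ratio); // arc length R * theta
}
```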

Summary

Using an orthographic projection of an image to determine texture coordinates has a number of benefits: Geographic discontinuities do not require special handling, the image resolution/appearance is independent of the terrain level of detail, and the implementation is relatively simple.

zglueck avatar Jun 14 '18 16:06 zglueck

Surface Images Using Orthographic Projection Transforms

Background

The previous comment discussed positive initial findings of using an orthographic transform for displaying images instead of an equirectangular transform. The primary benefits of the orthographic projection are the elimination of special cases at geographic discontinuities (poles and antimeridian), better image quality over the poles, and a simple implementation approach.

The orthographic projection technique is a two-step process. First, the shape coordinates and their accompanying texture coordinates (what part of the desired image should attach to the shape points) are transformed into a new texture. Second, that new texture is sampled to "paint" the surface image onto the terrain.

This comment will focus on the first part, transforming surface shapes and their texture coordinates to a new projected texture.

Setup

To demonstrate how a shape could skew an image, a right trapezoid shape was used to map a square image. Four points and texture coordinates are defined. The orthographic projection is centered at the middle of the shape.

The image below shows the geographic positioning of the future surface image in transparent green with a black outline. The red, green, blue, and white image on the bottom of the page is the source image that will be transformed to the geographic position using the orthographic projection technique. The black background inset image is the projection of the geographic shape to the orthographic projection with the texture coordinates of the red, green, blue, and white image stitched on. screen shot 2018-06-27 at 3 53 34 pm

The inset black background image is the culmination of step one and represents the orthographic projection of the shape and its image.

Next Steps

This texture/image may now be sampled by the second step of the process in order to apply the image to terrain.

Notes

  • The "resolution" of the projected image can be controlled by the extent of the orthographic projection
  • More investigation is needed to understand the impact of the near and far planes of the orthographic projection

zglueck avatar Jun 27 '18 21:06 zglueck

Is stippling going to be implemented for surface shapes anytime soon?

lockieRichter avatar Sep 07 '18 05:09 lockieRichter