
Improve RTC coordinates precision

Open xeolabs opened this issue 2 years ago • 0 comments

Background

Models positioned far from the coordinate origin often have vertex coordinates of large magnitude, which require double precision to view accurately.

GPUs, however, only support single-precision arithmetic, so rendering such coordinates directly results in jittering.

To work around this, we emulate double-precision coordinates by partitioning the coordinates into 3D rectilinear tiles, making each coordinate relative to its tile's origin. This enables us to store the coordinates more efficiently, as single-precision values, and to use special math and shader tricks to view them as if they were double-precision.
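The partitioning step can be sketched as follows. This is a minimal illustration, not xeokit's actual API; the names `TILE_SIZE`, `tileOrigin` and `toRTC` are hypothetical, and `Math.fround` stands in for the round trip through single-precision GPU storage.

```javascript
// Illustrative sketch of relative-to-center (RTC) encoding with a fixed
// tile size. Names here are hypothetical, not xeokit's actual API.

const TILE_SIZE = 1000.0;

// Snap a double-precision world coordinate to the origin of its tile.
function tileOrigin(worldPos) {
  return worldPos.map(v => Math.floor(v / TILE_SIZE) * TILE_SIZE);
}

// Split a world-space position into a double-precision tile origin and a
// small RTC offset. The offset survives the round trip through
// single-precision (simulated here with Math.fround) because it stays
// within [0, TILE_SIZE), where float32 is dense.
function toRTC(worldPos) {
  const origin = tileOrigin(worldPos);
  const rtc = worldPos.map((v, i) => Math.fround(v - origin[i]));
  return { origin, rtc };
}
```

At render time, the vertex shader never sees the large world coordinates: the CPU computes a per-tile translation (tile origin minus eye position) in double precision and folds it into the tile's view matrix, so the GPU only works with small, accurate values.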

Accuracy

The accuracy of this scheme depends on selecting an appropriate tile size for the magnitude of the coordinates we wish to place into it (i.e. partitioning). For small coordinates, we can choose a large tile size, since the density of representable floating-point values (IEEE 754) is greater at the lower end of the floating-point range. As coordinate magnitudes increase, however, we must choose an increasingly smaller tile size, since representable values become sparser at the higher end of the floating-point range.
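The sparsity argument can be made concrete by computing the spacing between adjacent float32 values (the ULP) at a given magnitude. The helper below is illustrative only, not part of xeokit:

```javascript
// Spacing between adjacent float32 values (the ULP) at magnitude x.
// float32 has 23 fraction bits, so the ULP at binary exponent e is 2^(e - 23).
function ulp32(x) {
  const f = Math.fround(Math.abs(x));
  const exp = Math.floor(Math.log2(f)); // float32 binary exponent of x
  return Math.pow(2, exp - 23);
}
```

For example, near a magnitude of 1000 (inside a small tile), float32 resolves steps of about 6e-5, while near 10,000,000 (an untiled far-from-origin coordinate) the spacing grows to a full unit, which is exactly the jittering scale seen without RTC.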

Tasks

For a given set of coordinates, find a heuristic that automatically computes the optimal tile size, i.e. the size that best preserves their accuracy when they are converted to RTC.

As an interim step until we have such a robust heuristic, we'll start by setting xeokit's default tile size to 1000, which works nicely with some of our super-distant roadworks models.

The more tiles we have, the slower the viewer, since each tile represents at least one WebGL draw call. This makes the heuristic all the more desirable, since it would create the minimal number of tiles.

  • [x] Set default RTC tile size to 1000
  • [ ] Automatically compute optimal tile size for given coordinates

xeolabs · Jul 16 '22 09:07