3DTilesRendererJS
TilesRenderer: Cesium can load more data in LRUCache
Question
Hi! I uploaded the 3D file; it can be downloaded here: https://api.cesium.com/assets/3291928/sources/Photography.zip or here: https://drive.google.com/file/d/1CWiLmoWYtRR1UuDu4XNXRRWGikhRqTw2/view?usp=drive_link
1. Viewed with CesiumJS: clear
2. Viewed with 3DTilesRendererJS: low polygon detail
3. My code, using the three.js OrbitControls
Supplemental Data
No response
Library Version
0.4.8
Three.js Version
0.174
Thank you very much for providing the data set and some code! This is extremely helpful. Here's what's going on:
The LRUCache is responsible for making sure the amount of data loaded is limited to avoid overloading device graphics memory, and by default the memory cap is set to a max of 429MB, or 40% of a gigabyte of GPU storage. This seemed like a good number to start with and accounts for mobile devices with lower memory caps, as well, though I'm open to discussing whether that default should be changed (for reference, Cesium includes a similar cacheBytes setting that defaults to ~536 MB, or half a GB). Memory per loaded tile is estimated using three.js' functions to determine texture GPU-memory usage (plus 33.33% if mipmaps are used) as well as by measuring the geometry vertex attribute sizes. Note that textures are often the dominating factor when it comes to memory usage for tile sets. On systems with more memory these values can be tweaked to strike a balance between data downloaded (or loaded from cache) and device memory usage.
Regarding your tile set - it's structured in a way that I would consider suboptimal, leading to the cache becoming full very quickly. Most of the textures on initial load are 1024 x 1024 or larger - up to 4K textures. These not only use a ton of memory but will also cause framerate hiccups when uploading the data to the GPU. For reference, a single 1024 x 1024 pixel image will take up 5.6MB (including 33.33% for mipmaps). And a single 4K image will take up ~89.5MB on its own - so you can see how the memory will add up quickly.
For reference, the Google Photorealistic Tiles data set primarily uses 256 x 256 images per tile and splits into new child tiles for more texture detail. A 256 x 256 image is 16x smaller than a 1K image and 256x smaller than a 4K image, and can be uploaded to the GPU much faster.
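As a quick back-of-the-envelope check (just illustrative arithmetic for uncompressed RGBA textures, not a utility from this library or three.js), those figures work out like this:
// Rough GPU memory estimate for an uncompressed RGBA8 texture with a full mipmap chain.
// Purely illustrative arithmetic - not an API from 3DTilesRendererJS or three.js.
function estimateTextureBytes( width, height ) {
	const baseBytes = width * height * 4; // 4 bytes per pixel (RGBA8)
	return baseBytes * ( 4 / 3 );         // + ~33.33% for the mipmap chain
}
console.log( estimateTextureBytes( 256, 256 ) / 1e6 );   // ~0.35 MB
console.log( estimateTextureBytes( 1024, 1024 ) / 1e6 ); // ~5.6 MB
console.log( estimateTextureBytes( 4096, 4096 ) / 1e6 ); // ~89.5 MB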
With all that said - fixing the tile set to use smaller images per tile would be the best solution for both memory usage and framerate. If you don't want to or can't do that then increasing the LRUCache.minBytesSize and maxBytesSize parameters will allow more tile data to load into memory but note that this can result in a massive amount of memory being loaded. When I increase the memory cap I'm seeing 3GB of loaded data in the cache.
I am curious that Cesium is loading more data given that it has a similar memory cap system. Perhaps data is being measured or loaded differently. Would you be able to make a small example that sets up Cesium and displays your tile set in the camera view so we can compare what the differences might be?
cc @christophereast - maybe related to your issue in #1052.
I guess that's what you said, so I just need to adjust the LRUCache parameters, right? Here's the example code for CesiumJS and the picture
I guess that's what you said, so I just need to adjust the LRUCache parameters, right?
Yes, but I'd consider it a bit of a hack compared to actually addressing the structure of the tile set. Again, note that it can cause a ton of data to load, and for this tile set you'll need to increase them quite a bit. You can tweak them as needed but this worked for me:
tiles.lruCache.minBytesSize *= 10;
tiles.lruCache.maxBytesSize *= 10;
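If you'd rather set explicit values than multiply the defaults, my understanding is that these fields are plain byte counts - treat the exact numbers below as illustrative:
// Assuming minBytesSize / maxBytesSize are raw byte counts (illustrative values).
tiles.lruCache.minBytesSize = 2 * 1024 * 1024 * 1024; // lower bound of the cache's byte range (~2 GB)
tiles.lruCache.maxBytesSize = 3 * 1024 * 1024 * 1024; // upper cap (~3 GB); keep this >= minBytesSize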
Here's the example code for CesiumJS and the picture
Thanks - when I have time I will try to take a look at this in more detail to see what the differences are.
Also - how was this tile set generated? I see "osgb2tiles" in the tile set header but searching that tool name doesn't show any results.
OSGB2Tiles is a conversion tool that has limited support for texture optimization or model lightweighting, and is generally used for development and debugging. Thank you for your answer. The problem has been resolved and this can help more people! 😄🙂
OSGB2Tiles is a conversion tool that has limited support for texture optimization or model lightweighting
There may be other better tools available, then, or perhaps you can request some changes. Is there a website available with information about the tool?
Thank you for your answer. The problem has been resolved and this can help more people! 😄🙂
Of course! I'm going to keep this open until I can investigate some of the Cesium differences, though.
It looks like Cesium defaults to a cache of ~500MB to 1GB whereas this project initializes to ~300MB to ~400MB. This explains part of the discrepancy. Even when setting the cache ranges to match Cesium's, though, it seems Cesium is loading at least one LoD higher than this project. This could be due to a number of factors, though:
- Differences in tile load order / priority
- Differences in error calculation (inside and outside of the frustum)
- Differences in load / hierarchy traversal strategies
But I'll have to look into this more in depth in the future to understand exactly where these differences are coming from and whether they should be considered bugs or not. It's still the case, though, that the tile set provided is structured suboptimally and should be regenerated to use more appropriately sized textures.
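For anyone trying to reproduce the comparison, here's roughly how I'd line the two caches up - Cesium's cacheBytes / maximumCacheOverflowBytes options vs this project's LRUCache byte range. The tileset URL, the viewer variable, and the exact values are placeholders:
// CesiumJS side - cacheBytes defaults to ~512 MB, with additional overflow allowed.
const tileset = await Cesium.Cesium3DTileset.fromUrl( 'path/to/tileset.json', {
	cacheBytes: 512 * 1024 * 1024,
	maximumCacheOverflowBytes: 512 * 1024 * 1024,
} );
viewer.scene.primitives.add( tileset );

// 3DTilesRendererJS side - set the LRUCache byte range to roughly the same window.
tiles.lruCache.minBytesSize = 512 * 1024 * 1024;
tiles.lruCache.maxBytesSize = 1024 * 1024 * 1024;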
I noticed that the tile load order / priority of this project is different from Cesium: this project gives __depthFromRenderedParent the highest priority, so I guess that when the LRUCache is full, all the tiles will stay at the same level of detail. Cesium takes several factors into consideration, including screen space error and the distance of the tile to the center ray of the camera, so in that case, when the cache is full, the tiles that are closer to the camera (resulting in higher screen space error) and at the center of the screen will stay at a higher level of detail than the others.
Would it be possible to use lruCache.unloadPriorityCallback to implement a "Cesium-like" calculated priority?
Yes, but you need something like:
tilesetRenderer.downloadQueue.priorityCallback = priorityCallback;
tilesetRenderer.parseQueue.priorityCallback = priorityCallback;
lruCache.unloadPriorityCallback = unloadPriorityCallback;
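As a sketch of what a "Cesium-like" comparator could look like - assuming these callbacks are sort-style comparators that receive two tiles, and assuming fields like __error and __distanceFromCamera are cached on each tile during traversal. The field names and the sign convention (which tile is processed or evicted first) are assumptions to verify against the library's default callbacks before relying on this:
// Hypothetical comparator favoring tiles with higher screen space error, then tiles
// closer to the camera. __error / __distanceFromCamera are assumed internal fields.
const priorityCallback = ( tileA, tileB ) => {
	if ( tileA.__error !== tileB.__error ) {
		return tileA.__error > tileB.__error ? 1 : - 1; // higher error first
	}
	return tileA.__distanceFromCamera < tileB.__distanceFromCamera ? 1 : - 1; // closer first
};

// Evict in roughly the opposite order that tiles are loaded.
const unloadPriorityCallback = ( tileA, tileB ) => - priorityCallback( tileA, tileB );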
I'm happy to rethink the priority function - removing the depth comparison would likely be an improvement - but the primary issue here is that more tiles seem to be loaded vs Cesium even when the cache size has been increased, which is unrelated to the priority function. If someone could put together an A / B comparison scene with Cesium and 3DTilesRenderer using the tile set in this issue and position the camera at the same angle with the same camera parameters, that would be the best way to help. Then it will be easier to dig in and understand where the calculations are meaningfully different.
I've created a demo here to compare the tiles loaded between Cesium and 3DTilesRenderer. With a basic tile set (such as the default tile set at the link) you can see that the loaded tiles and memory usage are the same (both systems are set to evict as much memory as possible and load as much data as is needed to display the tile set). It looks like Cesium may be deferring tile requests or refinement while the camera is moving too much to avoid loading tiles that aren't needed, which may be a good idea to add here, but it can result in "popping" when moving the camera around.
Regarding the specific tile set from this issue - it's structured such that there are 14 immediate subtree JSON files that then contain content, and how this case is handled is where Cesium and 3DTilesRenderer seem to differ. This project will load the full first layer of content-ful tiles regardless of their initial depth to ensure there is no pop-in when moving the camera. It will also wait until the full first layer of tiles is loaded before rendering to avoid the appearance of data "trickling in". Cesium does neither of these, which can give the impression that it's loading faster, but it also suffers from pop-in when moving the camera around (see Cesium tiles popping in on the right vs no pop-in with 3d tiles renderer on the left):
https://github.com/user-attachments/assets/d9879e5f-d29a-4720-ac46-aaddbb4e1d59
This pop in behavior will also affect tile sets like the Google Photorealistic Tiles data set where the root is composed of a series of empty tiles that will be culled by the frustum checking during traversal before contentful tiles are finally reached. This can be particularly distracting with fading tiles where tiles at these boundaries can fade in from nothing.
This difference in behavior is particularly noticeable with this tile set because of how extremely large each tile is (using multiple 2K+ textures, over 75MB of texture data for a single tile's content), meaning they will take a long time to load and quickly fill up the cache, which is generally smaller in this project than in Cesium by default.
I'm generally trying to keep the number of flags and options in this project down so it stays understandable, but I'm happy to discuss what a new TilesRenderer option might be called to toggle between these behaviors. But bear in mind this only impacts tile sets with the specific structure outlined above.
I noticed that the tile load order / priority of this project is different from Cesium: this project gives __depthFromRenderedParent the highest priority, so I guess that when the LRUCache is full, all the tiles will stay at the same level of detail
The priority function has been adjusted in #1228 to remove prioritizing by depth first. This has had an adverse effect on the cache where children can be loaded and added to the cache first, potentially preventing parent tiles from being added if the tile set is at the edge of what can be loaded (see #1230), which will have to be addressed at some point.
I've updated the load behavior in #1241 to fix a couple of traversal-related issues that ultimately reduce the number of tiles displayed in the Google Tiles case by a little bit but nothing significant (~6 tiles out of 200-300) and will provide more consistent handling of "empty tiles" during traversal.
Regarding this issue of loading all the root tiles in this case vs just loading those in the frustum, I'm reminded of this issue relating to Cesium's handling of root tile children where this behavior is discussed in a bit more detail and it's acknowledged that Cesium should probably be loading these tiles by default, too. My feeling is that generally loading all children of a "REPLACE" refinement node is the "right" thing to do to avoid this kind of pop-in and is consistent with how refinement of "REPLACE" tiles is handled elsewhere in the hierarchy.
So coming back to what to do - the fundamental issue here is that the tiles in the provided data set are massive and take up tons of memory, meaning they take a long time to load and fill the cache up quickly, preventing other tiles from being able to load. If the tile set were better formed then loading these extra tiles at the root would be fairly minor and would not result in lower quality visuals.
I think for now I'm going to say that the data set should be regenerated to not have these issues, in the interest of avoiding adding unnecessary options to the TilesRenderer class. If other people have different feelings on this, or there's a data set that this behavior is somehow causing an issue for and that absolutely cannot be regenerated, then I'm open to being convinced it needs to be added.
I'll close this for now until there are other requests to match this behavior in Cesium specifically.