Significant lag in PointPrimitiveCollection.removeAll
With a large number of points, PointPrimitiveCollection.removeAll can take a few seconds to hide the points from the scene.
```js
var viewer = new Cesium.Viewer('cesiumContainer');
var numPoints = 100000;
var pointPrimitives = viewer.scene.primitives.add(new Cesium.PointPrimitiveCollection());
var base = viewer.scene.globe.ellipsoid.radii.x;
var color = Cesium.Color.LIGHTSKYBLUE;
var outlineColor = Cesium.Color.BLACK;

// Scatter 100k points in a band alongside the globe.
for (var j = 0; j < numPoints; ++j) {
    var position = new Cesium.Cartesian3(
        16000000 * Math.random() - 8000000,
        -(4000000 * j / numPoints + base),
        2000000 * Math.random() - 1000000);
    pointPrimitives.add({
        pixelSize : 5,
        color : color,
        outlineColor : outlineColor,
        outlineWidth : 0,
        position : position
    });
}

// Left-click anywhere to trigger the slow removeAll.
var handler = new Cesium.ScreenSpaceEventHandler(viewer.scene.canvas);
handler.setInputAction(function() {
    pointPrimitives.removeAll();
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
```
Reported in the forum: https://groups.google.com/forum/?hl=en#!topic/cesium-dev/WiFG75oEkFM
I found a workaround for this. It completely solves the lag on my Win7/Chrome machine, but I'm not sure if there are ramifications for memory use or other side effects. Take a look in Context.js at `PickId.prototype.destroy`. Try replacing this line:

```js
delete this._pickObjects[this.key];
```

with this line:

```js
this._pickObjects[this.key] = undefined;
```

For me, this runs at full speed. Any ideas what's really happening here?
`removeAll` iterates through each primitive and removes it, which removes the pick IDs from the cache one at a time.
Deleting a pick ID changes the shape of the `_pickObjects` cache object, which I would expect to be a slow operation; but I expected this object to already be in "dictionary" mode, and expected dictionary-like performance.
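To make the shape-change cost concrete, here's a minimal microbenchmark (illustrative only, not from the thread; exact timings vary by engine and version) comparing `delete` against assigning `undefined` on a 100k-key object:

```js
// Illustrative microbenchmark, not Cesium code: build a 100k-key cache
// object, then clear it two ways and compare timings in the console.
function buildCache(n) {
    var cache = {};
    for (var i = 0; i < n; ++i) {
        cache['pick' + i] = { key : 'pick' + i };
    }
    return cache;
}

var n = 100000;

var a = buildCache(n);
console.time('delete each key');
for (var i = 0; i < n; ++i) {
    delete a['pick' + i];      // may change the object's shape each time
}
console.timeEnd('delete each key');

var b = buildCache(n);
console.time('assign undefined');
for (var i = 0; i < n; ++i) {
    b['pick' + i] = undefined; // keeps the shape; slot becomes a dead entry
}
console.timeEnd('assign undefined');
```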
The new ES6 `Map` object may work better, or it might still be unoptimized and end up being worse. We'd have to try it and see.
Do we need to change the shape of the `_pickObjects` cache object 100k times? Is there a downside to having 100k extra `undefined` keys hanging around in there, or can there be some kind of post-process to clean them up en masse?
Maybe set 100k keys to `undefined` and then call a helper function to clone the remaining non-`undefined` keys into a fresh cache object?
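As a sketch of that idea (a hypothetical helper, not Cesium API):

```js
// Hypothetical "mark then compact" helper: bulk removals just assign
// undefined (cheap), and this pass rebuilds the cache without the dead slots.
function compactCache(cache) {
    var fresh = {};
    for (var key in cache) {
        if (cache.hasOwnProperty(key) && cache[key] !== undefined) {
            fresh[key] = cache[key];
        }
    }
    return fresh;
}

// e.g. after a bulk removal:
// this._pickObjects = compactCache(this._pickObjects);
```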
Currently there is no way to distinguish removing one primitive from removing 100k primitives. Each primitive destroys the pick IDs that it owns, one by one. If you set the value to `undefined` in the cache, then the data structure will grow unbounded as new pick IDs are allocated. You could track the free slots and periodically compact, of course, but then you're slowly approaching writing your own hashtable data structure, so perhaps we should just do that directly.
We definitely need to delete them, because having an object with that many properties (even `undefined` ones) has an adverse effect on performance and memory; but I agree that doing it one at a time for a large number of objects is probably a bad idea. As you mentioned in your second post, some sort of mark-and-compact/copying would probably work well.
> slowly approaching writing your own hashtable data structure, so perhaps we should just do that directly.
This is the heart of one of the main memory/performance bottlenecks in Cesium. If we could get rid of this kind of tracking and use ES6 `Map`s, we would be in much better shape. The problem there is that it's a breaking change to do correctly, so we may have to split it up across a few releases.
So, at Scott's suggestion, I tried temporarily replacing Context.js's internal use of `_pickObjects = {}` with `= new Map()` and changed all references to `set`/`get`/`delete` function calls. Findings:

- Performance is way better than a typical JS object's `delete`, taking about 0.25 seconds to call `Map.delete` 100k times, compared to ~4s to do the same with deleting from an object.
- Performance is still not as good as setting 100k keys to `undefined`, which only drops about 2 frames of animation. Setting `undefined` is fast enough that you can barely see the hiccup. I don't know what the cost would be to clone a large number of remaining keys.
- The use of `delete` as a prototype function name in `Map` is terrible for jsHint, which red-flags it as an inappropriate use of a reserved keyword. I know a JS minifier on another project that has been known to choke on such uses of "reserved" words. I worked around this with `this._pickObjects['delete'](this.key);`.
- The compatibility table for `new Map()` has some holes in it.
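For reference, the experiment amounts to roughly the following (a sketch, not the actual Context.js diff):

```js
// Sketch of the Map-based pick-ID cache, not the actual Context.js change.
var pickObjects = new Map();

// Allocate: store the pick object under its key.
pickObjects.set(42, { primitive : 'example' });

// Look up by key during picking.
var picked = pickObjects.get(42);

// Destroy: bracket notation sidesteps jsHint's reserved-word warning
// for Map.prototype.delete.
pickObjects['delete'](42);
```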
Any news on this issue?
Also reported here: https://groups.google.com/d/msg/cesium-dev/XJqVTmdCq2o/L6h8msTYBAAJ
Issue reported here again: https://groups.google.com/forum/?hl=en#!topic/cesium-dev/bK6AfhrLwFA
I don't have hard numbers, but I think suspending events before making bulk changes to an `EntityCollection` does help with performance. See my minor change to the linked example: suspend, then add/remove, then resume. With the original version, `removeAll()` would crash my tab, but it only takes a few seconds in the fixed version.
Looking at the code, I'm not totally clear why this helps, so it probably bears some analysis before we just tell everybody to do it. Maybe somebody more familiar with that class could provide insight?
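For anyone who wants to try it, here's a minimal sketch of the suspend/resume pattern (`suspendEvents`/`resumeEvents` are the real `EntityCollection` API; the entity setup is just an example):

```js
var viewer = new Cesium.Viewer('cesiumContainer');
var entities = viewer.entities;

// Batch the changes so collectionChanged fires once for the whole
// batch instead of once per entity.
entities.suspendEvents();
entities.removeAll();
for (var i = 0; i < 100000; ++i) {
    entities.add({
        position : Cesium.Cartesian3.fromDegrees(-75.0 + 0.0001 * i, 40.0),
        point : { pixelSize : 5 }
    });
}
entities.resumeEvents();
```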
Also reported by @EdenEdk in #6534
This also seems to be a problem with `BillboardCollection` and `PrimitiveCollection` when removing a large number of billboards/primitives.
See an example with `PrimitiveCollection`, reported by @abhimonk in #7230.
And @kesemdavid reported seeing this with `BillboardCollection` in #7184.
Compatibility for `Map` has likely improved since 2015. It could be a viable option now.
@emackey So can we do it? Does someone want to open a PR? I can try, but I'm less familiar with the Context.js code.
@Beilinson @mzschwartz5 I didn't follow the full discussion in #12916, but did that PR resolve this issue? Or is there more to do that merits keeping this open?
I would think it helped, at least, if not fully solved the issue. The original issue here isn't as bad as #12896 was, which changed an array to a map. #12916 just changed an object to a map.
We'd need to do a performance measurement to know for sure - or maybe @Beilinson already has.
@jjspace @mzschwartz5 I just got around to checking this, and I think this issue is resolved: even 1 million points takes under 200ms on my machine using the sandcastle and `pointPrimitives.removeAll()`.
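For anyone re-measuring, a rough timing along these lines (assuming the `pointPrimitives` collection from the repro code at the top of this issue) is enough:

```js
// Rough timing check; numbers will vary by machine and browser.
console.time('removeAll');
pointPrimitives.removeAll();
console.timeEnd('removeAll');
```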