Thank you for your support, @msbarry. I would like to know why the first point of the feature is used. Based on my understanding, at the same zoom level, the first point of a feature should only be present in one tile. However, after executing sliceIntoTiles, the feature is split across multiple tiles. Why isn't the limit applied using the vertices of each smaller piece after the split?
We want to show/hide an entire feature, not individual parts of a feature that may appear on different tiles; otherwise the feature could appear broken. It could use the centroid of the whole feature, the first point, an interior point, etc. to compute the label grid coordinate. In practice this should only get used on very small features, so it probably shouldn't make much of a difference, although I could see switching to the centroid as well.
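To make the idea concrete, here is a minimal sketch (not planetiler's actual code; `labelGridKey` is a hypothetical helper) of why one representative point works: whichever single point you pick for the whole feature hashes to exactly one grid cell, so every slice of the feature is counted against the same cell no matter which tile the slice lands in.

```java
// Hypothetical sketch: derive a label-grid cell key from ONE representative
// point of the whole feature (first point, centroid, ...), so all slices of
// that feature share the same cell key.
public class LabelGridSketch {
  /** Packs the grid cell (cellX, cellY) containing (x, y) into one long key. */
  static long labelGridKey(double x, double y, int gridSize) {
    long cellX = (long) Math.floor(x / gridSize);
    long cellY = (long) Math.floor(y / gridSize);
    return (cellY << 32) | (cellX & 0xffffffffL);
  }

  public static void main(String[] args) {
    // The feature's first point determines its cell, regardless of how many
    // tiles the rest of the geometry spills into.
    System.out.println("grid key = " + LabelGridSketch.labelGridKey(130, 70, 64));
  }
}
```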
Can you try this out and see how it works for your dataset?
Thank you very much for your patient guidance.
I noticed that FeatureGroup.TileFeatures.add applies certain restrictions to features, but I couldn't locate the logic that verifies that the current tile information matches the group information. For example, if a large feature is split across (1,0,0) and (1,1,0), is the group calculation performed twice within the same tile grid, or is it handled separately for each tile?
If you could point me to the specific part of the code where this is managed, that would be really helpful. I’m planning to carefully review the workflow in this section.
Yes, exactly: the features get grouped into tiles, and then when reading all the features for a tile, FeatureGroup.TileFeatures.add starts ignoring features in the current tile once a group ID (in that tile) has too many features in it. Since a polygon's center or first point always falls into exactly one tile, it doesn't matter if the entire large feature is split across many tiles. The results might look strange, though, so in practice you'd only want to use this on small polygons/lines.
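The per-tile limiting described above boils down to something like this (a simplified sketch, not the actual FeatureGroup.TileFeatures.add implementation): while accumulating features for one tile, count how many have been seen per group key and drop the rest once a group exceeds its limit.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of per-tile group limiting (assumed behavior, not the
// real planetiler implementation).
public class GroupLimiter {
  private final int limit;
  private final Map<Long, Integer> countsByGroup = new HashMap<>();

  public GroupLimiter(int limit) {
    this.limit = limit;
  }

  /** Returns true if a feature with this group key should be kept in this tile. */
  public boolean accept(long groupKey) {
    int seen = countsByGroup.merge(groupKey, 1, Integer::sum);
    return seen <= limit;
  }
}
```

Note the counts are per tile: a fresh `GroupLimiter` per tile is what makes the "one representative point per feature" trick necessary, since each tile only sees its own counts.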
In the case where the tile size is not evenly divisible by the group size, you need to take care to make the tile buffer large enough that the same set of features shows up in each group when that group is duplicated between adjacent tiles. But for 16, 32, 64, or 128 that's not a concern.
Ahh, actually, now that I think about it more, this could be problematic: if part of a large feature appears in this tile, its center might be far away, and there could be lots of other small features around that center point. Those small features would cause the large one to get hidden on the tile that contains all of them, but this tile doesn't have visibility into them.
The right way to do this would be to turn it into a 2-step process: emit all raw features along with their group ID, sort/group/filter by the group ID, then continue processing all of those features. There are other operations that could benefit from this "global grouping" step, so it might make sense to do it alongside them.
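A rough sketch of that 2-step idea (an assumed pipeline shape, not existing planetiler code): step 1 materializes (groupId, feature) pairs, and step 2 groups by the ID and applies the limit globally, so the decision sees every member of a group before any per-tile work happens.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of "global grouping": limit features per group ID across the WHOLE
// dataset, before tiling, instead of per tile. RawFeature is a stand-in for
// whatever the step-1 output would actually be.
public class GlobalGrouping {
  public record RawFeature(long groupId, String id) {}

  public static List<RawFeature> limitPerGroup(List<RawFeature> raw, int limitPerGroup) {
    return raw.stream()
        // step 2a: group the step-1 output by group ID
        .collect(Collectors.groupingBy(RawFeature::groupId))
        .values().stream()
        // step 2b: keep only the first `limitPerGroup` features of each group
        .flatMap(group -> group.stream().limit(limitPerGroup))
        .toList();
  }
}
```

In a real pipeline step 2 would be an external sort/merge by group ID rather than an in-memory `groupingBy`, since the intermediate feature set doesn't fit in RAM.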
But for very small features (smaller than the tile buffer) I think that the limited implementation here could still work?
I have a dataset of over 300 million land-use polygons, about 200 GB of data, and after testing, the results seem a bit strange: there are noticeable seams between tiles, and at certain zoom levels you can clearly see the four borders of each tile. After further testing, I've realized that even with this functionality implemented, it doesn't meet my current needs. Please forgive me for initially thinking in the wrong direction. I had assumed there would be one large boundary feature encompassing all the data, so that even if some small features were dropped, the boundary would serve as the background. In reality there is no such large boundary feature, just many small polygons, so limiting features leaves many small areas blank.
What I actually need is to achieve a display where detailed polygon outlines are shown only at higher zoom levels, such as 11-14.
However, at lower zoom levels (0-10), the entire area should still be covered, achieving a similar effect to the following:
To ensure full coverage while keeping each tile within approximately 1 MB, my idea is to pixelate the polygons before executing encodeAndEmitFeature. For example, with a tile resolution of 256x256, I would calculate which pixels each output geometry covers, and then replace the original feature with one small square per covered pixel. The goal is to minimize the size of each feature while ensuring that the area is fully covered. Could you help me assess whether this approach is feasible? I really appreciate your help!
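For what it's worth, here is a feasibility sketch of that pixelation step using only the JDK (planetiler itself works with JTS geometries; this just illustrates the approach, and `coveredPixels` is a hypothetical helper). It samples each pixel's center against the polygon; adjacent covered pixels could then be merged into larger rectangles to shrink the tile further.

```java
import java.awt.geom.Path2D;
import java.util.ArrayList;
import java.util.List;

// Sketch: rasterize a polygon onto a resolution x resolution grid by testing
// each pixel center for containment. One emitted square per covered pixel.
public class Pixelate {
  public record Pixel(int x, int y) {}

  public static List<Pixel> coveredPixels(Path2D polygon, int resolution) {
    List<Pixel> out = new ArrayList<>();
    for (int y = 0; y < resolution; y++) {
      for (int x = 0; x < resolution; x++) {
        // Center sampling; a coverage-fraction test would be more accurate
        // near polygon edges but also more expensive.
        if (polygon.contains(x + 0.5, y + 0.5)) {
          out.add(new Pixel(x, y));
        }
      }
    }
    return out;
  }
}
```

At 256x256 this is 65,536 containment tests per feature per tile, so for 300 million polygons the run-length/rectangle merging (or rendering all features onto one shared grid per tile) would matter a lot for both output size and build time.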