Set thresholds for fetching large areas
Currently, at https://github.com/hotosm/ml-enabler/blob/master/ml_enabler/api/ml.py#L411, we're not checking the total number of tiles within the bbox to see whether the request is acceptable for performance.
We should set this threshold based on the prediction zoom level.
cc @batpad
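
As a rough sketch of the check, the tile count for a bbox at a given zoom can be computed from the corner tiles without enumerating every tile. The snippet below assumes `mercantile` is available (it's commonly used for XYZ tile math); `MAX_TILES_PER_REQUEST` is a hypothetical threshold that would eventually be derived from the prediction zoom level.

```python
import mercantile

MAX_TILES_PER_REQUEST = 4096  # hypothetical limit; would be tuned per prediction zoom


def tile_count(bbox, zoom):
    """Number of XYZ tiles covering a (west, south, east, north) bbox at `zoom`."""
    west, south, east, north = bbox
    ul = mercantile.tile(west, north, zoom)  # upper-left tile
    lr = mercantile.tile(east, south, zoom)  # lower-right tile
    return (lr.x - ul.x + 1) * (lr.y - ul.y + 1)


def check_bbox(bbox, zoom):
    """Reject requests whose bbox covers too many tiles at the requested zoom."""
    count = tile_count(bbox, zoom)
    if count > MAX_TILES_PER_REQUEST:
        raise ValueError(
            f"bbox covers {count} tiles at z{zoom}; limit is {MAX_TILES_PER_REQUEST}"
        )
    return count
```

A check like this could sit at the top of the prediction-fetch endpoint so oversized requests are rejected before any database work happens.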
Marking this as to-do until we have a clear sense of performance and load on the API. At the moment, if predictions are stored at z18, the API will allow requests at z18 or aggregate them to any lower zoom level. This is the best-case scenario for clients that want to fetch raw predictions.
As we add more data, we'll get a better sense of which restrictions make sense -- these could apply at the tile level, or at the model / prediction level.
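
For reference, aggregating to a lower zoom means grouping each stored z18 tile under its ancestor at the requested zoom: every step down in zoom merges 4 child tiles, so a z16 request folds 4^(18-16) = 16 z18 tiles into each output tile. A minimal sketch of that grouping, again assuming `mercantile` and a hypothetical list of `(tile, value)` pairs:

```python
from collections import defaultdict

import mercantile


def aggregate_to_zoom(pred_tiles, target_zoom):
    """Group stored prediction tiles (e.g. z18) by their parent tile at a lower
    target zoom. `pred_tiles` is an iterable of (mercantile.Tile, value) pairs;
    how the grouped values are merged (mean, max, etc.) is left to the caller."""
    groups = defaultdict(list)
    for tile, value in pred_tiles:
        parent = mercantile.parent(tile, zoom=target_zoom)
        groups[parent].append(value)
    return groups
```

Whatever restriction we land on, the cost of a request scales with the number of stored tiles touched, not the number of output tiles, so a zoom-aware threshold would need to account for that fan-in.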