ml4floods
[Models] Consider prediction at different pyramid levels
Two options to choose from:
- Batch ingested predictions: save predictions as e.g. COG GeoTIFF, so that we have predictions precomputed at all levels of the pyramid (see the sketch after this list).
- Live inference: run inference inside the live visualization server, using as input the image at the currently queried pyramid level.
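
For the first option, something like the following rasterio sketch would precompute the pyramid: write the prediction mask as a tiled GeoTIFF and build internal overviews. The function name, the uint8 mask, and the 256-pixel block size are illustrative assumptions, not existing ml4floods code; for a strictly COG-compliant file layout you would probably finish with rio-cogeo or `gdal_translate -of COG`.

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling

def save_prediction_as_cog(prediction: np.ndarray, src_profile: dict, out_path: str) -> None:
    """Write a 2D class-mask array as a tiled GeoTIFF with an overview pyramid."""
    profile = src_profile.copy()
    profile.update(
        driver="GTiff",
        dtype="uint8",
        count=1,
        compress="deflate",
        tiled=True,          # internal 256x256 tiles let clients fetch small chunks
        blockxsize=256,
        blockysize=256,
    )
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(prediction.astype("uint8"), 1)
    with rasterio.open(out_path, "r+") as dst:
        # Nearest-neighbour resampling so overview pixels stay valid class labels.
        dst.build_overviews([2, 4, 8, 16, 32], Resampling.nearest)
        dst.update_tags(ns="rio_overview", resampling="nearest")
```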
So I think, since our pipeline right now uses batched ingestion of raw S1/S2/whatever data, batched inference makes sense too: save the output as COG in a bucket so we can read little chunks in a Leaflet map client. This is probably a 'safe' goal.
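
On the read side, this is what "little chunks" buys us: a tile server (or a notebook) can open the COG in the bucket remotely and request only a window, or a decimated overview, without downloading the whole file. A minimal sketch; the bucket URL is a placeholder.

```python
import rasterio
from rasterio.windows import Window

COG_URL = "https://storage.googleapis.com/<bucket>/prediction.tif"  # placeholder URI

with rasterio.open(COG_URL) as src:
    # Windowed read: only the bytes covering one 256x256 chunk are fetched.
    tile = src.read(1, window=Window(col_off=512, row_off=512, width=256, height=256))
    # Decimated read: a smaller out_shape lets GDAL serve the request from the
    # nearest internal overview instead of full-resolution data.
    preview = src.read(1, out_shape=(src.height // 16, src.width // 16))
```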
If we're trying to run inference on a new area, it would be cool if it was all streaming: stream new S1/S2 data, run inference, and serve up little chunks, all in the same data stream. This is probably at least a 'stretch' goal.
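
For the streaming version, the core piece is a tile endpoint that runs the model on whatever level the map client asks for. A rough sketch under heavy assumptions: `fetch_s2_chip`, the TorchScript model artifact, the `/tiles/z/x/y` route, and "water = class 1" are all hypothetical stand-ins, not existing ml4floods API.

```python
import io
import numpy as np
import torch
from PIL import Image
from flask import Flask, send_file

app = Flask(__name__)
model = torch.jit.load("flood_model.pt").eval()  # placeholder model artifact

def fetch_s2_chip(z: int, x: int, y: int) -> np.ndarray:
    """Hypothetical helper: return a (bands, 256, 256) float32 S2 chip for tile z/x/y."""
    raise NotImplementedError

@app.route("/tiles/<int:z>/<int:x>/<int:y>.png")
def tile(z: int, x: int, y: int):
    chip = fetch_s2_chip(z, x, y)                      # image at the queried pyramid level
    with torch.no_grad():
        logits = model(torch.from_numpy(chip)[None])   # add batch dimension
        mask = logits.argmax(dim=1)[0].numpy()         # per-pixel class prediction
    buf = io.BytesIO()
    # Assumes class 1 is water; render it as white on black for the map overlay.
    Image.fromarray((mask == 1).astype("uint8") * 255).save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")
```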
Maybe there's a middle ground where we create a UI in which the user selects a new area, which is then automatically fetched, inferred on, and stored to a bucket of COGs. Maybe this is the 'stretch' and the former is the 'bold and crazy'?
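
That middle ground is basically one job per selected area, chaining the pieces above. A hedged sketch where `download_s2` and `run_inference` are hypothetical helpers and `save_prediction_as_cog` is the batch sketch from earlier:

```python
def process_new_area(bounds: tuple, out_uri: str) -> None:
    """bounds = (min_lon, min_lat, max_lon, max_lat) selected by the user in the UI."""
    image, profile = download_s2(bounds)                   # hypothetical fetch step
    prediction = run_inference(image)                      # hypothetical model step
    save_prediction_as_cog(prediction, profile, out_uri)   # reuse the batch sketch above
```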