
[Models] Consider prediction at different pyramid levels

gonzmg88 opened this issue 4 years ago • 1 comment

Two options to choose from:

  1. Batch ingested predictions: save predictions as, e.g., Cloud Optimized GeoTIFF (COG), so that predictions are available at every level of the pyramid.
  2. Live inference: run inference inside the live visualization server, using the image at the currently queried pyramid level as input.
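To make option 1 concrete: a COG stores overviews, i.e. successively 2x-downsampled copies of the full-resolution raster. A minimal sketch of building those pyramid levels for a binary flood mask is below (pure NumPy; `build_pyramid` is a hypothetical helper, and in a real pipeline you would write the overviews into the COG itself with rasterio/GDAL rather than keep them in memory). Max-pooling is used so that flood pixels survive coarsening.

```python
import numpy as np

def build_pyramid(pred, levels=4):
    """Return [pred, pred/2, pred/4, ...]: 2x-downsampled copies of a
    prediction mask, mimicking the overview levels stored in a COG.
    Uses 2x2 max-pooling so positive (flood) pixels are preserved."""
    pyramid = [pred]
    for _ in range(levels - 1):
        p = pyramid[-1]
        h, w = p.shape
        # pad to even dimensions, then 2x2 max-pool
        p = np.pad(p, ((0, h % 2), (0, w % 2)), mode="edge")
        p = p.reshape(p.shape[0] // 2, 2, p.shape[1] // 2, 2).max(axis=(1, 3))
        pyramid.append(p)
    return pyramid

pred = (np.random.rand(256, 256) > 0.8).astype(np.uint8)
pyr = build_pyramid(pred)
print([p.shape for p in pyr])  # [(256, 256), (128, 128), (64, 64), (32, 32)]
```

The trade-off between the two options is where this downsampling happens: precomputed and stored (option 1), or implicit in whatever resolution the viewer requests (option 2).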

gonzmg88 avatar Feb 16 '21 16:02 gonzmg88

So I think: if our pipeline right now uses batched ingestion of raw S1/S2/whatever data, then batched inference makes sense too. Save the output as a COG in a bucket so we can read little chunks in a Leaflet map client. This is probably a 'safe' goal.
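Reading "little chunks" works because a Leaflet client requests XYZ slippy-map tiles, and each tile maps to a bounding box that a tile server can translate into a windowed read from the COG (e.g. with rasterio). A sketch of that tile-to-bounds math, assuming the standard XYZ tiling scheme (`tile_bounds` is an illustrative helper, not part of ml4floods):

```python
import math

def tile_bounds(z, x, y):
    """Lon/lat bounding box (lon_min, lat_min, lon_max, lat_max) of an
    XYZ slippy-map tile, per the standard Web Mercator tiling scheme.
    A tile server would convert this box into a windowed read of the COG."""
    n = 2 ** z
    lon_min = x / n * 360.0 - 180.0
    lon_max = (x + 1) / n * 360.0 - 180.0
    lat_max = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / n))))
    lat_min = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * (y + 1) / n))))
    return lon_min, lat_min, lon_max, lat_max

print(tile_bounds(0, 0, 0))  # whole world: (-180.0, ~-85.05, 180.0, ~85.05)
```

The zoom level `z` also picks which COG overview to read, which is exactly why storing predictions at all pyramid levels (option 1) pairs well with a map client.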

If we're trying to run inference on a new area, then it would be cool if it was all streaming: stream new S1/S2 data, run inference, and serve up little chunks, all in the same data stream. This is probably at least a 'stretch' goal.
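The streaming idea above could be sketched as a lazy generator pipeline: tiles of each scene are inferred and yielded as soon as the scene arrives, so the map client can be updated while later scenes are still downloading. Everything here is hypothetical (`run_model` is a stand-in threshold, not the real ml4floods segmentation model):

```python
import numpy as np

def chop_into_tiles(scene, tile=64):
    """Yield ((row, col), chunk) pairs covering a 2D scene array."""
    h, w = scene.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            yield (i // tile, j // tile), scene[i:i + tile, j:j + tile]

def run_model(chunk):
    # Stand-in for the real flood-segmentation model: threshold as "flood".
    return (chunk > 0.5).astype(np.uint8)

def stream_predictions(scenes):
    """Lazy pipeline: infer tile by tile as each new scene arrives and
    yield (scene_id, tile_id, prediction) immediately, instead of
    batch-writing a full COG before anything is served."""
    for scene_id, scene in scenes:
        for tile_id, chunk in chop_into_tiles(scene):
            yield scene_id, tile_id, run_model(chunk)

scenes = [("s1_demo", np.random.rand(128, 128))]
out = list(stream_predictions(scenes))
print(len(out))  # 4 tiles for a 128x128 scene with 64px tiles
```

In practice the generator would be fed by the ingestion stream and its outputs pushed straight to the tile server, which is what makes this the 'stretch' version of the pipeline.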

Maybe there's a middle ground where we create a UI where the user selects a new area, which is then automatically fetched, inferred, and stored to a bucket of COGs. Maybe this is the 'stretch' and the former is 'bold and crazy'?

Lkruitwagen avatar Feb 17 '21 10:02 Lkruitwagen