Bruno Sánchez-Andrade Nuño
Closes #138
This competition to do land cover classification seems like a great opportunity to test our model openly. It requires ingesting non-Sentinel data, RGB only, so we are not there...
After #117 has closed, I think we should update the documentation in two places, both of which are small changes:
- [ ] Add a new model benchmark section,...
Tagging here a possible fine-tuning application of Clay, or a downstream benchmark. Microsoft AI for Earth provided funding for [this project](https://essd.copernicus.org/articles/15/3991/2023/essd-15-3991-2023.html) to use a Unet on Sentinel-1 and Sentinel-2, trained on...
This PR adds a sample notebook to explore the embedding space locally using openTSNE. Depending on your compute resources, it can scale up to the full training set of v0....
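For reference, a minimal sketch of the kind of openTSNE call the notebook relies on; the `embeddings` array and the parameter choices here are illustrative, not the notebook's exact settings:

```python
import numpy as np
from openTSNE import TSNE

# Illustrative stand-in for the Clay embeddings loaded in the notebook.
embeddings = np.random.rand(10_000, 768).astype(np.float32)

tsne = TSNE(
    n_components=2,   # project to 2D for plotting
    perplexity=30,    # typical default; tune for larger sets
    metric="cosine",  # cosine distance works well for embedding vectors
    n_jobs=8,         # openTSNE parallelizes across cores
    random_state=42,
)
embedding_2d = tsne.fit(embeddings)  # (N, 2) low-dimensional embedding
```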
We should ensure datacubes only have one EPSG. Otherwise we [raise error](https://github.com/Clay-foundation/model/blob/0145e55bcf6bd3e9b19f5c07819a1398b6a22c35/src/model_clay.py#L898):

```python
File "/home/brunosan/code/Clay/model/src/model_clay.py", line 920, in predict_step
    raise NotImplementedError(
NotImplementedError: More than 1 EPSG code detected: {32636, 32637}...
```
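A minimal sketch of the kind of pre-check that would avoid hitting this error; the function name and call site are hypothetical, not the repo's API:

```python
# Hypothetical guard (illustrative name, not from model_clay.py): verify a
# batch of datacube chips shares a single EPSG code before prediction.
def assert_single_epsg(epsg_codes: list[int]) -> int:
    unique = set(epsg_codes)
    if len(unique) > 1:
        raise ValueError(f"More than 1 EPSG code detected: {unique}")
    return unique.pop()

# Example: returns 32636 for a uniform batch, raises for a mixed-UTM one.
assert_single_epsg([32636, 32636, 32636])
```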
For the linear probes, we are using c2smsfloods (#75). I'd like us to also consider other Sentinel-derived products:
- [ESA's WorldCereal](https://esa-worldcereal.org/en/about/worldcereal-products)
- [METER-ML](https://stanfordmlgroup.github.io/projects/meter-ml/)
- Ctrees ( @fmacchiavello to clarify here...
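For context, a minimal sketch of what a linear probe on frozen Clay embeddings looks like; the arrays and labels here are placeholders, not any of the datasets above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholders: frozen Clay embeddings and per-chip labels from one of the
# candidate products (e.g. flood / no-flood, crop / no-crop).
embeddings = np.random.rand(5_000, 768).astype(np.float32)
labels = np.random.randint(0, 2, size=5_000)

x_train, x_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=42
)

# The probe is just a linear classifier on top of the frozen embeddings;
# its accuracy is the benchmark signal, with no fine-tuning of the encoder.
probe = LogisticRegression(max_iter=1_000)
probe.fit(x_train, y_train)
print("probe accuracy:", probe.score(x_test, y_test))
```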
This is a great overview of [applying SAM for Geospatial](https://www.sciencedirect.com/science/article/pii/S1569843223003643#sec4). It would be a great benchmark to compare our segmentation to SAM's, especially since SAM seems to fail with resolutions below...
This is low priority for v0, but I think it is worth assessing the level of effort. Would it make sense to make all inputs optional? I understand we currently provide...
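A heavily hedged sketch of one way optional inputs could look, purely to gauge the level of effort; the band-group names and the zero-filling choice are assumptions for illustration, not a proposal for the actual encoder:

```python
import torch

# Assumed band groups (illustrative, not the model's actual input spec).
BAND_GROUPS = {"rgb": 3, "nir": 1, "sar": 2, "dem": 1}

def pack_inputs(available: dict[str, torch.Tensor], size: int = 256):
    """Zero-fill any missing band group and return a presence mask.

    `available` maps band-group name -> (C, H, W) tensor; absent groups are
    filled with zeros so the encoder always sees a fixed channel layout.
    """
    chips, present = [], []
    for name, channels in BAND_GROUPS.items():
        if name in available:
            chips.append(available[name])
            present.append(True)
        else:
            chips.append(torch.zeros(channels, size, size))
            present.append(False)
    return torch.cat(chips, dim=0), torch.tensor(present)

pixels, mask = pack_inputs({"rgb": torch.rand(3, 256, 256)})
```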
We are currently using MAE, where the training task is to reconstruct the input image. We can specify any instrument, but the reconstruction target is always the input itself. This means...
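As a reference point, a minimal sketch of the standard MAE objective being described, where the reconstruction target is the (patchified) input itself; tensor shapes and names are illustrative, not the repo's implementation:

```python
import torch

def mae_reconstruction_loss(target_patches, predicted_patches, mask):
    """Standard MAE loss: mean squared error on masked patches only.

    target_patches:    (B, N, D) patchified input image (the target IS the input)
    predicted_patches: (B, N, D) decoder output
    mask:              (B, N) with 1 for masked patches, 0 for visible ones
    """
    loss = (predicted_patches - target_patches) ** 2
    loss = loss.mean(dim=-1)                 # average error per patch
    return (loss * mask).sum() / mask.sum()  # average over masked patches only
```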