Training run for Clay v1.5
Ideas to add:
- Train a larger model version
- Use SAM as teacher model
- Add Satellogic data to reduce bias on high res training
Possibly relevant? I think the authors make a good case for focusing efforts on improving vision representations in multimodal LLMs, to make models that are truly flexible in the inputs they accept and the tasks they can address, and for improving evaluation benchmarks like COCO to test more than just mAP.
https://arxiv.org/pdf/2406.16860 https://twitter.com/sainingxie/status/1805862015778341123
https://x.com/osanseviero/status/1807679660328620099
^ Possible funding source for model distillation work.
Ok @brunosan could you help determine a priority list here? The other thing I'd love to see is if we can add MODIS data, as long as the architecture won't need to change
Update: MODIS has just been added. #311 We are not adding Satellogic to the foundational training due to license terms (we understand Clay would then need to carry a "CC-BY Satellogic" attribution on the trained model). We are now securing a compute block.
Curious why CC-BY is a blocker? This article indicates that the model can be used, even commercially, with attribution. https://satellogic.com/2024/05/01/satellogic-open-source-release-a-large-dataset-of-high-resolution-imagery-for-ai-model-training/
For any high res dataset sourced from a commercial provider, I expect they will at least want this kind of attribution. Having a model that understands submeter resolutions in addition to coarser resolution public imagery would be very valuable.
> Curious why CC-BY is a blocker?
This is for foundational training. If we train with data that requires attribution, we understand the attribution carries over to the trained model, so all users of Clay would also need to attribute it, which would add friction. E.g. if Planet incorporates Clay into their pipeline, they might need to attribute Satellogic when using Clay.
This of course does not prevent us, or anyone, from making a finetuned version of Clay with Satellogic, Maxar, or Planet data. That version would carry the licenses of the data used.
fwiw, Clay is trained with NAIP and LINZ, which are both well under 1 meter (32% of the 70 million chips).
PS: AFAIK it is not legally settled whether the license of each training dataset carries over to the trained model. In LLMs the practice seems to be that it does not, but we choose to take the safer position and only use fully open data.
Update: We are conducting another model run for CLAY with the following updates:
- Added MODIS to the list of sensors. #311
- Implemented MRL (Matryoshka Representation Learning) on CLAY embeddings.
- Introduced SAM as the new teacher model.
- Using Fused Transformers as the Encoder/Decoder backbone.
- Switching to Fused Adam and 8-bit Adam as optimizers.
- Reduced the decoder size for MAE.
- Randomly dropping latitude, longitude, and time information.
- Randomly dropping some or all channels.
- Converting Sentinel-1 data from raw values to dB scale.
We are running several experiments with these changes, and based on the results, the successful adjustments will be included in the new model run. Keep track of the changes in the dev branch.
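For illustration, here is a rough sketch of what two of the data changes above could look like: the Sentinel-1 raw-to-dB conversion and the random channel dropping. Function names, the clipping floor, and the drop probability are illustrative assumptions, not the actual Clay implementation.

```python
import numpy as np

def s1_to_db(raw, floor=1e-10):
    """Convert Sentinel-1 backscatter from raw linear values to dB scale.
    `floor` clips non-positive values before the log (an illustrative choice)."""
    return 10.0 * np.log10(np.maximum(raw, floor))

def drop_channels(chip, p=0.3, rng=None):
    """Randomly zero out whole channels of a (C, H, W) chip, an augmentation
    similar in spirit to the "randomly dropping some or all channels" item.
    Keeps at least one channel so the sample is never fully empty."""
    rng = rng or np.random.default_rng()
    keep = rng.random(chip.shape[0]) > p
    if not keep.any():
        keep[rng.integers(chip.shape[0])] = True
    return chip * keep[:, None, None].astype(chip.dtype)
```

The same masking idea extends to the latitude/longitude/time metadata: zero (or replace with a learned token) each field with some probability so the model does not over-rely on it.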
Update: Clay v1.5 - Running a 20-node g6.48xlarge cluster, i.e. 160 L4 GPUs at 100% power. It takes ~8 hours per epoch with the corpus expanded to include MODIS; Clay v1 was 50 h/epoch without MODIS. See above for the changes implemented, plus a larger Transformer of ~500 million params instead of ~200M.
6 epochs in. No sign of plateauing.
NAIP reconstruction
Hi guys, I'd be glad to hear any updates on this. Thanks!
Still training. Will stop soon and do the embeddings run for the world #277
Very promising losses:
and reconstructions:
Anything else in particular you'd like to know, @print-sid8?
Hi @brunosan , I've been following the Clay project and I've been using Clay v1.0 in my current project. Amazing work from the team!
I'm curious about the release of Clay v1.5, should we expect the weights to get uploaded to HuggingFace anytime soon? Thanks!
SatSummit, Nov 18th. We are currently doing QA. If you can't wait until then, we'll be ready to give early access to the checkpoint in a few days. We'll post the link here.
We've seen some issues we are trying to solve. Namely, it seems that the MRL implementation we are using might not be as good as we hoped.
Got this, thanks for the update! Looking forward to the link
Clay v1.5 is already on HF: https://huggingface.co/made-with-clay/Clay/blob/main/clay_v1.5.ckpt Documentation is still pending and the model will be officially released next week, but feel free to use it; the branch is https://github.com/Clay-foundation/model/tree/dev
Closing here as the run is done.