SeisCLIP
Official code for the paper 'SeisCLIP: A seismology foundation model pre-trained by multimodal data for multipurpose seismic feature extraction'
University of Science and Technology of China
🌟 Spec-Based Foundation Model Supporting a Wide Range of Seismology Tasks
As shown in the figure, SeisCLIP supports downstream tasks including event classification 💥, event location 🌍, and focal mechanism analysis ⛰.
Due to Hi-net data-redistribution restrictions, the location and focal mechanism analysis datasets are not hosted here; they can be accessed through Baidu Netdisk: Links (password: SEIS).
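To make the "Spec" input concrete: the model consumes short-time spectrograms rather than raw waveforms. Below is a minimal sketch of producing an input with the 50 × 120 Spec Size listed in the Model Zoo; the STFT parameters (`nperseg=98`, `noverlap=49`) are our assumptions chosen only to reproduce that shape, not the exact settings from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                           # STEAD waveforms are sampled at 100 Hz
waveform = np.random.randn(3, 6000)  # 3 components x 60 s (placeholder data)

specs = []
for trace in waveform:
    # nperseg=98 gives 50 one-sided frequency bins; a hop of 49 samples
    # gives 121 frames over 6000 samples, cropped to 120 below
    _, _, Sxx = spectrogram(trace, fs=fs, nperseg=98, noverlap=49)
    specs.append(Sxx[:, :120])
spec = np.stack(specs)               # shape: (3, 50, 120)
print(spec.shape)
```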
🌟 News
- 2024.2.2: 🌟🌟🌟 Congratulations! The paper has been published in IEEE Transactions on Geoscience and Remote Sensing (IEEE TGRS). Links.
- 2023.9.14: 🌟🌟🌟 Pretrained weights and a simple usage demo for our SeisCLIP have been released, along with the implementation of SeisCLIP for event classification (a minimal sketch follows this list). Because the location and focal mechanism analysis code depends on the 'PyTorch Geometric' library, it may be challenging for beginners; we will release it later with more detailed documentation. (Python 3.9.0 is recommended.)
- 2023.9.8: The paper is released on arXiv; the code will be released gradually.
- 2023.8.7: GitHub repository initialized (README template adapted from Meta-Transformer).
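For orientation while the full demo documentation is in progress, here is a minimal, hypothetical sketch of the CLIP-style idea behind SeisCLIP: a spectrogram branch and an event-information branch are embedded into a shared space and compared by cosine similarity. The `ToyEncoder` below is a stand-in for illustration only, not the released architecture.

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in encoder: flatten input and project to a shared embedding space."""
    def __init__(self, in_dim, embed_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)

    def forward(self, x):
        z = self.proj(x.flatten(1))
        return z / z.norm(dim=-1, keepdim=True)  # unit-normalize embeddings

spec_encoder = ToyEncoder(in_dim=3 * 50 * 120)   # spectrogram branch
info_encoder = ToyEncoder(in_dim=8)              # event-information branch

spec = torch.randn(4, 3, 50, 120)   # batch of spectrograms
info = torch.randn(4, 8)            # batch of event metadata vectors
similarity = spec_encoder(spec) @ info_encoder(info).T
print(similarity.shape)  # (4, 4): pairwise spectrogram-info cosine similarity
```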
🔓 Model Zoo
| Model | Pretraining Data | Spec Size | #Params | Download | Download (China mirror) |
|---|---|---|---|---|---|
| SeisCLIP | STEAD-1M | 50 × 120 | - | ckpt | [ckpt] |
| SeisCLIP | STEAD-1M | 50 × 600 | - | ckpt | [ckpt] |
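The checkpoints are PyTorch weight files; a minimal sketch for inspecting one is shown below. The local file name `SeisCLIP_STEAD_50x120.ckpt` is hypothetical, and the `state_dict` wrapper key is a common convention that may not match the released files exactly.

```python
import torch

# Hypothetical local file name for the 50 x 120 checkpoint from the table above
state = torch.load("SeisCLIP_STEAD_50x120.ckpt", map_location="cpu")
# Some checkpoints nest the weights under a "state_dict" key; unwrap if present
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]
# Print the first few parameter names and shapes to verify the download
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))
```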
Citation
If the code or paper helps your research, please cite:
@ARTICLE{si2024seisclip,
  author={Si, Xu and Wu, Xinming and Sheng, Hanlin and Zhu, Jun and Li, Zefeng},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={SeisCLIP: A Seismology Foundation Model Pre-Trained by Multimodal Data for Multipurpose Seismic Feature Extraction},
  year={2024},
  volume={62},
  pages={1-13},
  doi={10.1109/TGRS.2024.3354456}
}
License
This project is released under the MIT license.
Acknowledgement
This code is developed based on excellent open-source projects, including CLIP, OpenCLIP, AST, Meta-Transformer, ViT-Adapter, SeisBench, STEAD and PNW.