Evaluation
To run evaluation or training, local checkpoints are currently required. It would be useful, however, if the evaluation results could be reproduced starting from the model published on HuggingFace: answerdotai/ModernBERT-base.
It would also be handy to start training or fine-tuning from that HuggingFace model. Could you advise on the steps needed to achieve this? I am happy to help in any way I can.
Sorry for the late response. The code is compatible with HF model implementations, so you can start from any HF BERT model. To do so, define the model like this in the config:
```yaml
model:
  name: hf_bert
  use_pretrained: true
  pretrained_model_name: answerdotai/ModernBERT-base
  tokenizer_name: answerdotai/ModernBERT-base
```
Edit: you would also need to clear defaults.yaml in this case, since the defaults are still taken from MosaicBERT. We should open a PR to fix this.
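To make the defaults.yaml point concrete, here is a sketch of what a standalone config would need once the MosaicBERT defaults are cleared. Only the `model` block is taken from this thread; everything else is a hypothetical placeholder, since the repository's exact config schema is not shown here:

```yaml
# Sketch of a standalone config with defaults.yaml emptied.
# The model block below comes from this thread; all other section
# names and fields are illustrative assumptions, not the repo's schema.
model:
  name: hf_bert
  use_pretrained: true
  pretrained_model_name: answerdotai/ModernBERT-base
  tokenizer_name: answerdotai/ModernBERT-base

# With the MosaicBERT defaults cleared, every setting they used to
# supply (e.g. sequence length, optimizer, learning-rate schedule,
# dataloader options) must now be stated explicitly in this file.
```

The key point is that defaults.yaml currently merges MosaicBERT-specific values into every run, so simply adding the `model` block above on top of it can leave stale settings in place.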