Trying to fine-tune a transformer with spaCy, getting a RegistryError (E893) despite copy-pasting the architecture name from the docs
Every time I run the code below with the corresponding config, I get the following error telling me I used the wrong model architecture name, even though that exact name appears under "Available names" in the error message itself. This is my first time writing a custom config with spaCy, so I tried to follow the docs closely and keep it simple; I'm still very new to this:
RegistryError: [E893] Could not find function '"spacy-transformers.TransformerModel.v3",' in function registry 'architectures'. If you're using a custom function, make sure the code is available. If the function is provided by a third-party package, e.g. spacy-transformers, make sure the package is installed in your environment.
Available names: spacy-legacy.CharacterEmbed.v1, spacy-legacy.EntityLinker.v1, spacy-legacy.HashEmbedCNN.v1, spacy-legacy.MaxoutWindowEncoder.v1, spacy-legacy.MishWindowEncoder.v1, spacy-legacy.MultiHashEmbed.v1, spacy-legacy.Tagger.v1, spacy-legacy.TextCatBOW.v1, spacy-legacy.TextCatCNN.v1, spacy-legacy.TextCatEnsemble.v1, spacy-legacy.Tok2Vec.v1, spacy-legacy.TransitionBasedParser.v1, spacy-transformers.Tok2VecTransformer.v1, spacy-transformers.Tok2VecTransformer.v2, spacy-transformers.Tok2VecTransformer.v3, spacy-transformers.TransformerListener.v1, spacy-transformers.TransformerModel.v1, spacy-transformers.TransformerModel.v2, spacy-transformers.TransformerModel.v3, spacy.CharacterEmbed.v2, spacy.EntityLinker.v2, spacy.HashEmbedCNN.v2, spacy.MaxoutWindowEncoder.v2, spacy.MishWindowEncoder.v2, spacy.MultiHashEmbed.v2, spacy.PretrainCharacters.v1, spacy.PretrainVectors.v1, spacy.SpanCategorizer.v1, spacy.SpanFinder.v1, spacy.Tagger.v2, spacy.TextCatBOW.v2, spacy.TextCatBOW.v3, spacy.TextCatCNN.v2, spacy.TextCatEnsemble.v2, spacy.TextCatLowData.v1, spacy.TextCatParametricAttention.v1, spacy.TextCatReduce.v1, spacy.Tok2Vec.v2, spacy.Tok2VecListener.v1, spacy.TorchBiLSTMEncoder.v1, spacy.TransitionBasedParser.v2
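To rule out spacy-transformers not being registered at all, here is a minimal lookup against the same 'architectures' registry the error refers to (my assumption is that importing spacy_transformers is enough to trigger its registrations):

import spacy_transformers  # assumption: importing registers the package's architectures
from spacy import registry

# Query the exact name from the docs in the 'architectures' registry.
# If the package is installed and registered, this returns the factory
# function; if not, it raises the same kind of RegistryError as above.
fn = registry.architectures.get("spacy-transformers.TransformerModel.v3")
print(fn)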
The training code (train here is spacy.cli.train.train):

from spacy.cli.train import train

train(
    "./modelConfigs/configTransformer.cfg",
    output_path="./clinBert_Embedd_CAT",
    overrides={
        "paths.train": "./trainData/trainingSet_textcat.spacy",
        "paths.dev": "./trainData/testSet_textcat.spacy",
        "components.transformer.model.name": "./Bio_ClinicalBERT",
        "training.max_epochs": 15,
    },
)
My config:

[paths]
train = ""
dev = ""
raw = null
init_tok2vec = null
vectors = null

[system]
gpu_allocator = "pytorch"
seed = 0

[nlp]
lang = "en"
pipeline = ["transformer", "doc_vector_producer"]
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
before_creation = null
after_creation = null
after_pipeline_creation = null
disabled = []

[components]

[components.transformer]
factory = "transformer"

[components.transformer.model]
@architectures= "spacy-transformers.TransformerModel.v3",
name= "./Bio_ClinicalBERT"
tokenizer_config = {"use_fast": true}
transformer_config = {}
mixed_precision = true
grad_scaler_config = {"init_scale": 32768}

[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.doc_vector_producer]
factory = "tok2vec"

[components.doc_vector_producer.model]
@architectures = "spacy-transformers.TransformerListener.v1"
upstream = "transformer"
pooling = {"@layers":"reduce_mean.v1"}
grad_factor = 0.0

[components.doc_vector_producer.model.pooling]
@layers = "reduce_mean.v1"

[corpora]

[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 512
gold_preproc = false
limit = 0
augmenter = null

[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 512
gold_preproc = false
limit = 0
augmenter = null

[training]
train_corpus = "corpora.train"
dev_corpus = "corpora.dev"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
accumulate_gradient = 1
patience = 1600
max_epochs = 60
eval_frequency = 200

[training.score_weights]
cats_SIMILAR_f = 1.0
cats_NOT_SIMILAR_f = 1.0

[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 1000
buffer = 256
get_length = null

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = true

[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001

[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 10000
initial_rate = 2e-5

[pretraining]

[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null

[initialize.components]

[initialize.tokenizer]
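If I understand the loading path correctly, the error should also be reproducible without starting a training run, since the @architectures references get resolved as soon as the pipeline is built from the config. A minimal sketch (same config path as above):

from spacy.util import load_config, load_model_from_config

# Parsing the file by itself succeeds; the RegistryError only appears once
# the pipeline is built and the @architectures values are resolved.
config = load_config("./modelConfigs/configTransformer.cfg")
nlp = load_model_from_config(config, auto_fill=True)  # E893 raised here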
Your Environment
Info about spaCy
- spaCy version: 3.8.7
- Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.16
- Pipelines: en_core_web_sm (3.8.0)
I tried running it in a fresh venv, I checked for stray whitespace and mismatched quotes, I reinstalled spacy, and I reinstalled spacy-transformers so the two would be fully compatible. The error is the same every time.
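For completeness, this is a quick way to confirm which versions the venv actually ends up with (plain importlib.metadata, nothing spaCy-specific):

from importlib.metadata import version

# Print the exact installed versions, to compare against the environment info above.
print("spacy:", version("spacy"))
print("spacy-transformers:", version("spacy-transformers"))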