
A very simple framework for state-of-the-art Natural Language Processing (NLP)

253 flair issues

**Describe the bug**

```python
from hyperopt import hp
from flair.hyperparameter.param_selection import SearchSpace, Parameter

search_space = SearchSpace()
# define training hyperparameters
search_space.add(Parameter.EMBEDDINGS, hp.choice, options=[document_embeddings])
search_space.add(Parameter.LEARNING_RATE, hp.choice, options=[0.01, 0.05, 0.1, 0.15, 0.2])
search_space.add(Parameter.MINI_BATCH_SIZE,...
```

bug
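
Flair's `SearchSpace` wraps hyperopt's parameter definitions; conceptually, each `Parameter` maps to a set of candidate values that the optimizer samples from per trial. A stdlib-only sketch of that idea (the class and parameter names here are illustrative stand-ins, not flair's actual implementation):

```python
import random

# Illustrative stand-in for flair's SearchSpace: each parameter maps
# to a list of candidate values, and one trial samples one of each.
class SimpleSearchSpace:
    def __init__(self):
        self.parameters = {}

    def add(self, name, options):
        self.parameters[name] = options

    def sample(self):
        # pick one candidate value per parameter for a single trial
        return {name: random.choice(opts) for name, opts in self.parameters.items()}

search_space = SimpleSearchSpace()
search_space.add("learning_rate", [0.01, 0.05, 0.1, 0.15, 0.2])
search_space.add("mini_batch_size", [8, 16, 32])

trial = search_space.sample()
print(trial)  # e.g. {'learning_rate': 0.05, 'mini_batch_size': 16}
```

hyperopt's `hp.choice` does the same thing more cleverly (it lets TPE model which choices worked), but the search-space shape is the same: parameter name plus candidate list.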

Hello, we want to use Flair to extract relations from sentences. Since we want to use our own German corpus for this, we created it following the comment in #2726...

question

To be removed once it is done: please add the appropriate label to this ticket, e.g. feature or enhancement. **Is your feature/enhancement request related to a problem? Please describe.** ONNX...

This is a follow-up to #2280. Flair users would benefit from being able to easily share their models and try them out with the Hub widgets, or load models from...

Is `trainer.resume()` valid when fine-tuning is interrupted, or is it only meant for training? And if it is valid, is the Adam optimizer handled so that its momentum and scale vectors aren't initialized from scratch?

question
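
The concern behind this question is that Adam keeps per-parameter running moment estimates, and losing them on resume effectively restarts the optimizer's warm-up. A minimal stdlib sketch of why that state matters (a toy single-parameter Adam, not flair's trainer or checkpoint format):

```python
# Toy single-parameter Adam step: shows why the first/second moment
# estimates (m, v) and step count t must be restored from a checkpoint
# when resuming, rather than being reinitialized to zero.
def adam_step(param, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected moments
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return param - lr * m_hat / (v_hat ** 0.5 + eps), state

param, state = 1.0, {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(10):                     # "training" before the interruption
    param, state = adam_step(param, 0.5, state)

checkpoint = {"param": param, "state": dict(state)}  # what a resume should restore

# Resuming with the saved state continues where training left off;
# a fresh {"t": 0, "m": 0.0, "v": 0.0} would restart the bias
# correction and momentum from scratch.
param2, state2 = adam_step(checkpoint["param"], 0.5, dict(checkpoint["state"]))
```

So the practical check is whether the checkpoint that `resume()` loads includes the optimizer's state dict, not just the model weights.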

Hi, this PR adds support for encoder-only fine-tuning of T5 models. Supported models are T5, mT5, and LongT5. Unfortunately, ByT5 is currently not working, because this would require a change in...

I am working on a use case where I need to generate document embeddings for the abstracts of a few articles. I am using TransformerDocumentEmbeddings to instantiate a PubMedBERT model, and it generates 768...

question
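
`TransformerDocumentEmbeddings` condenses a text into one fixed-size vector: 768 dimensions for BERT-base-sized models such as PubMedBERT. One common way to obtain such a vector is to pool the per-token vectors; a stdlib-only mean-pooling sketch (illustrative of the pooling idea only, not flair's internals, which by default use the CLS token):

```python
import random

DIM = 768  # hidden size of BERT-base-style models such as PubMedBERT

# Stand-in per-token vectors: in practice these come from the transformer.
token_embeddings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(12)]

def mean_pool(vectors):
    # average the token vectors component-wise into one document vector
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(DIM)]

doc_embedding = mean_pool(token_embeddings)
print(len(doc_embedding))  # 768
```

Whatever the pooling strategy, the document vector's dimensionality equals the model's hidden size, which is why a BERT-base-sized model yields 768 numbers per abstract.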

Hi, I tried to fine-tune the T5-base model on Google Colab and got this error: `ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")`. To be more specific about where the error happens,...

bug
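
This ValueError arises because T5 is an encoder-decoder model: its forward pass needs decoder inputs in addition to encoder inputs, and refuses to run when neither is provided (as can happen when a classification-style pipeline feeds it only input IDs). A stdlib sketch mimicking that validation (not transformers' actual code):

```python
def t5_style_forward(input_ids, decoder_input_ids=None, decoder_inputs_embeds=None):
    # Encoder-decoder models need something to feed the decoder; with
    # only encoder inputs, the forward pass refuses to run.
    if decoder_input_ids is None and decoder_inputs_embeds is None:
        raise ValueError(
            "You have to specify either decoder_input_ids or decoder_inputs_embeds"
        )
    return {
        "encoder_in": input_ids,
        "decoder_in": decoder_input_ids if decoder_input_ids is not None else decoder_inputs_embeds,
    }

# Passing only encoder inputs reproduces the error; supplying
# decoder_input_ids (e.g. a start token) avoids it.
try:
    t5_style_forward([1, 2, 3])
except ValueError as e:
    print(e)

out = t5_style_forward([1, 2, 3], decoder_input_ids=[0])
```

That is also why encoder-only fine-tuning support (as in the PR above) sidesteps the issue: it stops routing the batch through the decoder entirely.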

I want to resume training a NER model as shown in the tutorials, loading the model checkpoint but running it with `trainer.resume(trained_model, base_path=path + '-resume', max_epochs=25)`. It simply...

bug