GLiNER
Generalist and Lightweight Model for Named Entity Recognition (Extract any entity types from texts) @ NAACL 2024
I am working on fine-tuning a model and running into a "forgetful" situation I wanted to bring to your attention. The two changes we made to the fine-tuning Jupyter notebook...
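If the forgetting shows up as degraded performance on the original entity types after fine-tuning, one common mitigation is to lower the learning rate and freeze most of the pretrained backbone. A minimal sketch, assuming a Hub checkpoint such as `urchade/gliner_small-v2.1` and that the backbone parameters can be recognised by the substring "token_rep" in their names (an assumption; check `model.named_parameters()` for your version):

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_small-v2.1")

# Freeze what looks like the pretrained token-representation backbone so that
# fine-tuning only updates the remaining layers. "token_rep" is an assumed name
# fragment; inspect model.named_parameters() to confirm it for your checkpoint.
for name, param in model.named_parameters():
    if "token_rep" in name:
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters after freezing: {trainable}")
```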
In the previous version I was using [this](https://github.com/urchade/GLiNER/blob/main/examples/load_local_model.ipynb) example to load a local model. But I needed to update `GLiNER`, because I needed to update `flair`, because the previous `flair` version was...
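For comparison, loading from a local directory in recent GLiNER versions still looks roughly like this (a sketch; the path is a placeholder and `local_files_only` is assumed to be passed through the Hugging Face hub mixin):

```python
from gliner import GLiNER

# Load a checkpoint previously written with model.save_pretrained("/path/to/local/gliner");
# the directory path here is a placeholder for your own location.
model = GLiNER.from_pretrained("/path/to/local/gliner", local_files_only=True)

entities = model.predict_entities(
    "Steve Jobs founded Apple in Cupertino.",
    labels=["person", "organization", "location"],
)
print(entities)
```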
I am unable to load the model from cache in offline mode.
1. I set the environment variable `HF_HOME` to `/.cache/huggingface/hub` in the Dockerfile.
2. I copied the cached models from...
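One detail worth double-checking: `HF_HOME` normally points at the cache root (the directory that contains `hub/`), not at `hub/` itself, and setting `HF_HUB_OFFLINE=1` stops the hub client from attempting any network calls. A sketch of the offline setup from the Python side, with placeholder paths and model name:

```python
import os

# Point HF_HOME at the cache root; the hub cache is then expected at $HF_HOME/hub.
os.environ["HF_HOME"] = "/.cache/huggingface"
# Hard offline mode: huggingface_hub will only look at the local cache.
os.environ["HF_HUB_OFFLINE"] = "1"

from gliner import GLiNER  # import after the env vars are set

# Works only if this repo id is already present in the copied cache.
model = GLiNER.from_pretrained("urchade/gliner_medium-v2.1")
print(model.predict_entities("Paris is in France.", labels=["location"]))
```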
I followed the exact steps from https://github.com/urchade/GLiNER/blob/main/examples/load_local_model.ipynb, and this takes close to 10 minutes to run. I read in other threads that the running time should be in seconds when I...
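If the time is going into repeated downloads rather than the load itself, one way to isolate that is to fetch the snapshot once with `huggingface_hub` and then time only the load from disk. A sketch; the model id below is an assumption, substitute whichever checkpoint you are using:

```python
import time
from huggingface_hub import snapshot_download
from gliner import GLiNER

# Download the checkpoint once (or reuse the cached copy if it is already there).
local_dir = snapshot_download("urchade/gliner_medium-v2.1")

# Then time only the load from the local snapshot, which should be a matter of seconds.
start = time.time()
model = GLiNER.from_pretrained(local_dir)
print(f"loaded from {local_dir} in {time.time() - start:.1f}s")
```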
Here's my code, which matches what I saw in `examples/finetune.ipynb` in this repo, but with a callback added to stop early and load the best model.
```python
training_args = TrainingArguments(...
```
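For context, this is roughly how early stopping plugs into that setup, assuming the `Trainer`/`TrainingArguments` exported from `gliner.training` accept the same callback and best-model options as their `transformers` counterparts (worth confirming against your installed version); `train_dataset`, `eval_dataset` and `data_collator` stand for the objects built earlier in the notebook:

```python
from gliner import GLiNER
from gliner.training import Trainer, TrainingArguments
from transformers import EarlyStoppingCallback

model = GLiNER.from_pretrained("urchade/gliner_small-v2.1")

training_args = TrainingArguments(
    output_dir="models",
    learning_rate=5e-6,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    eval_strategy="epoch",            # "evaluation_strategy" on older transformers
    save_strategy="epoch",            # must match the eval strategy for best-model loading
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,      # placeholders: built earlier in the notebook
    eval_dataset=eval_dataset,
    data_collator=data_collator,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```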
I am facing an error with the training arguments at this line:

training_args = TrainingArguments(
    output_dir="models",
    learning_rate=5e-6,
    weight_decay=0.01,
    others_lr=1e-5,
    others_weight_decay=0.01,
    lr_scheduler_type="linear",  # cosine
    warmup_ratio=0.1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    ...
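If the message is a `TypeError` about an unexpected keyword argument, it is worth checking which `TrainingArguments` class is imported: `others_lr` and `others_weight_decay` are GLiNER-specific additions and, to my understanding, are not accepted by the stock `transformers.TrainingArguments`. A sketch of the import the fine-tuning notebook relies on (verify against your installed gliner version):

```python
# GLiNER extends the Hugging Face TrainingArguments with a separate learning rate
# and weight decay for the non-encoder ("others") parameters; the plain
# transformers.TrainingArguments would reject these keywords.
from gliner.training import TrainingArguments

training_args = TrainingArguments(
    output_dir="models",
    learning_rate=5e-6,
    weight_decay=0.01,
    others_lr=1e-5,
    others_weight_decay=0.01,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
)
print(training_args.others_lr)
```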
When I try to load and run the ONNX model, I get the following error message. I ran the code from https://github.com/urchade/GLiNER/blob/main/examples/convert_to_onnx.ipynb to save the model as ONNX. This is...
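When this happens, it can help to check whether the exported file itself loads in onnxruntime, independent of GLiNER's wrapper; if the session is created cleanly, the problem is more likely on the pre/post-processing side. A sketch, with `model.onnx` standing for whatever path the conversion notebook wrote:

```python
import onnxruntime as ort

# Creating a session validates the exported graph itself.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Print the expected inputs and outputs so they can be compared with what the
# GLiNER inference code is actually feeding.
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```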
After installing gliner, the line below gives an error:
> from gliner import GLiNER
```
---------------------------------------------------------------------------
ImportError  Traceback (most recent call last)
/tmp/ipykernel_1653/103435563.py in ()
----> 1 from gliner import GLiNER...
```
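An ImportError right at `from gliner import GLiNER` is often an environment mismatch rather than a GLiNER bug. A small diagnostic sketch for checking which interpreter and package versions the kernel actually sees:

```python
import sys
import importlib.metadata as metadata

print("interpreter:", sys.executable)
# Report the versions visible to this kernel; a missing or mismatched torch/transformers
# install is a common cause of import-time failures.
for pkg in ("gliner", "torch", "transformers", "huggingface_hub"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed in this environment")
```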
AttributeError: 'DataParallel' object has no attribute 'device'
Traceback:
Skipping iteration due to error: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line...
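`nn.DataParallel` does not forward custom attributes of the wrapped module, so code that reads `model.device` has to go through `.module`, or derive the device from the parameters instead. A sketch of both workarounds (that GLiNER exposes a `device` attribute is inferred from the traceback):

```python
from torch import nn
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_small-v2.1")
wrapped = nn.DataParallel(model)

# Option 1: reach through the DataParallel wrapper to the original module.
device_via_module = wrapped.module.device          # assumes GLiNER defines .device
# Option 2: read the device off the parameters, which works for any nn.Module.
device_via_params = next(wrapped.parameters()).device

print(device_via_module, device_via_params)
```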
Is it possible to export to ONNX and run inference without depending on PyTorch?
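The exported graph itself runs with `onnxruntime` and NumPy alone; the open question is reproducing GLiNER's tokenization and span decoding without torch. A sketch of the runtime-only part, using an all-zeros dummy feed as a smoke test (file name and shapes are placeholders, and a real graph may still reject degenerate inputs):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("gliner_model.onnx", providers=["CPUExecutionProvider"])

# Map ONNX tensor type strings to NumPy dtypes for the dummy feed below.
dtype_map = {"tensor(int64)": np.int64, "tensor(float)": np.float32, "tensor(bool)": np.bool_}

# Build an all-zeros feed purely to show that inference needs nothing beyond
# onnxruntime + NumPy; real inputs come from GLiNER's preprocessing, which would
# have to be reimplemented (or run via the tokenizers library) without torch.
feeds = {}
for inp in session.get_inputs():
    shape = [dim if isinstance(dim, int) else 1 for dim in inp.shape]
    feeds[inp.name] = np.zeros(shape, dtype=dtype_map.get(inp.type, np.float32))

outputs = session.run(None, feeds)
print([out.shape for out in outputs])
```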