Resume-NER
Applying BERT for named entity recognition on resumes.
I get the error below as well:

Anaconda3\lib\site-packages\torch\nn\functional.py", line 2846, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (500) to match target batch_size (1)
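Without seeing the exact training loop here, this is a minimal sketch of the shape mismatch that typically produces this error in token classification (all tensor names below are illustrative, not from this repo): `F.cross_entropy` requires logits and labels to agree on the batch dimension, which for per-token labels means flattening both.

```python
import torch
import torch.nn.functional as F

batch_size, seq_len, num_labels = 1, 500, 9

logits = torch.randn(batch_size, seq_len, num_labels)         # model output
labels = torch.randint(0, num_labels, (batch_size, seq_len))  # per-token tags

# Wrong: dropping the batch dim from the logits but not the labels gives
# input shape (500, 9) vs. target shape (1, 500), which raises exactly
# "Expected input batch_size (500) to match target batch_size (1)".
# loss = F.cross_entropy(logits.squeeze(0), labels)

# Right: flatten both so every token is its own classification example.
loss = F.cross_entropy(logits.view(-1, num_labels), labels.view(-1))
print(loss.item())
```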
I cloned this repo, checked out the "dependabot/pip/transformers-4.30.0" branch, and ran `pip install -r requirements.txt`, but I get the error below. Do I need to do anything else?

Collecting numpy==1.18.4...
Bumps [transformers](https://github.com/huggingface/transformers) from 3.0.2 to 4.30.0. Release notes, sourced from transformers's releases: v4.30.0: 100k, Agents improvements, Safetensors core dependency, Swiftformer, Autoformer, MobileViTv2, timm-as-a-backbone. 100k: Transformers has just reached 100k stars...
Bumps [flask](https://github.com/pallets/flask) from 1.1.2 to 2.3.2. Release notes, sourced from flask's releases: 2.3.2 is a security fix release for the 2.3.x release branch. Security advisory: https://github.com/pallets/flask/security/advisories/GHSA-m2qf-hxjv-5gpq (CVE-2023-30861). Changes: https://flask.palletsprojects.com/en/2.3.x/changes/#version-2-3-2...
Bumps [numpy](https://github.com/numpy/numpy) from 1.18.4 to 1.22.0. Release notes, sourced from numpy's releases: v1.22.0. NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...
We are getting the error below while running app.py; kindly help.

\Anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py", line 950, in forward
    batch_size, seq_length = input_shape
ValueError: not enough values to unpack (expected 2, got 1)
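A likely cause, sketched with stock Hugging Face names (not necessarily what app.py uses): BERT's forward unpacks `input_ids.shape` into `(batch_size, seq_length)`, so a 1-D tensor with no batch dimension triggers exactly this ValueError.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

ids = torch.tensor(tokenizer.encode("John Doe, Python developer"))  # shape (seq_len,)
# model(ids)  # ValueError: not enough values to unpack (expected 2, got 1)

out = model(ids.unsqueeze(0))  # shape (1, seq_len): works
# or let the tokenizer add the batch dimension itself:
out = model(**tokenizer("John Doe, Python developer", return_tensors="pt"))
```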
RuntimeError: CUDA out of memory. Tried to allocate 92.00 MiB (GPU 0; 1.96 GiB total capacity; 1.46 GiB already allocated; 16.38 MiB free; 1.51 GiB reserved in total by PyTorch...
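Not a fix from this repo, but the usual levers on a ~2 GiB card are a smaller batch plus gradient accumulation (and optionally fp16). A runnable sketch with toy stand-ins for the model and data; swap in the real BERT model and dataset:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-ins so the sketch runs end to end.
model = nn.Linear(768, 9).to(device)
data = TensorDataset(torch.randn(64, 768), torch.randint(0, 9, (64,)))
loader = DataLoader(data, batch_size=2)  # micro-batch small enough to fit
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

accum_steps = 8  # effective batch size = 2 * 8 = 16
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = nn.functional.cross_entropy(model(x.to(device)), y.to(device))
    (loss / accum_steps).backward()  # scale so accumulated grads average out
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```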
Hi, I have a question about resumes longer than 500 tokens. At the moment it seems that everything after the first 500 tokens is truncated; please clarify. Thanks.
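For context, one common workaround (not something this repo is confirmed to implement, just a sketch) is a sliding window: tokenize with a stride so a long resume becomes several overlapping chunks, run the model on each chunk, and merge the per-token predictions.

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
long_resume = "..."  # full resume text

enc = tokenizer(
    long_resume,
    max_length=500,                  # per-chunk token budget
    truncation=True,
    stride=50,                       # overlap so entities spanning a boundary survive
    return_overflowing_tokens=True,  # emit every chunk, not just the first
    return_offsets_mapping=True,     # map predictions back to character spans
)
print(len(enc["input_ids"]), "chunks")  # run the model on each chunk, then merge
```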