
🦙 Integrating LLMs into structured NLP pipelines

Results: 31 spacy-llm issues

Hello, when calling `nlp = assemble("config.cfg")` I get the following traceback. I am using the Dolly config.cfg example from here: https://spacy.io/usage/large-language-models#usage. Thanks in advance!

```
File /usr/local/lib/python3.9/dist-packages/spacy_llm/util.py:48, in assemble(config_path, overrides)...
```
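For reference, the Dolly quickstart config on that documentation page looks roughly like the sketch below. This is recalled from the docs rather than copied, so the exact task version, labels, and model size may differ from the page:

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v2"
labels = ["PERSON", "ORGANISATION", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Dolly.v1"
name = "dolly-v2-3b"
```

Loading this with `assemble` downloads and runs the Dolly model locally via `transformers`, so a traceback at `util.py:48` can also stem from the model download or from mismatched library versions rather than the config itself.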

usage

Added Ollama support

## Description
I added support for Ollama, which can now be used in conjunction with `spacy-llm`. I added all the models currently supported as well, but perhaps...

feat/new
feat/model

## Description
Add Bedrock Titan and Jurassic support

### Corresponding documentation PR
(#338)

### Types of change
New feature

## Checklist
- [x] I confirm that I have the right...

feat/new
feat/model

## Description

### Corresponding documentation PR

### Types of change

## Checklist
- [x] I confirm that I have the right to submit this contribution under the project's MIT license....

feat/new
feat/model

I'm utilising spacy-llm with GPT-3.5 Turbo 16k for NER (`spacy.NER.v2`). While the pipeline usually works as expected, populating `doc.ents` with the identified entities, there are instances where `doc.ents` comes back empty, even though...

Inference fails with

```
TypeError: transformers.generation.utils.GenerationMixin.generate() got multiple values for keyword argument 'pad_token_id'
```

The cause for this is unclear so far. The workaround for the time being is to pin `transformers`...

bug
feat/model

While using the [spacy_llm rel_openai](https://github.com/explosion/spacy-llm/tree/main/usage_examples/rel_openai) example I continually run into the error:

- `IndexError: [E035] Error creating span with start 9 and end -1 for Doc of length 16.`

This...
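For context, E035 is spaCy's span-bounds error: a span's token indices must lie within the `Doc`. A minimal sketch of the invariant being violated (illustrative only, not spaCy's actual implementation; the REL task produced an end index of -1, which can never satisfy it):

```python
def valid_span(doc_length: int, start: int, end: int) -> bool:
    """Illustrates the bounds check behind spaCy's E035:
    a span needs 0 <= start <= end <= doc_length."""
    return 0 <= start <= end <= doc_length

# The failing case from the error message: end == -1 on a Doc of length 16.
print(valid_span(16, 9, -1))   # False: a negative end index cannot form a span
print(valid_span(16, 9, 12))   # True: a well-formed span
```

This usually points at the LLM response parser failing to map the model's answer back onto token offsets, rather than at the input text itself.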

with the following entry in my config file:

```ini
[components.llm.model]
@llm_models = "spacy.GPT-4.v3"
name = "gpt-4-turbo"
config = {"temperature": 0.0}
context_length = 110000
```

The `context_length` seems to be read...

I have downloaded the llama2 model to a local path, but the program always tries to download llama2 from Hugging Face. How do I let the program know the model is in my local...
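One generic workaround (an assumption on my part, not confirmed in this thread): since spacy-llm loads Hugging Face models through `transformers`, the standard Hugging Face environment variables can redirect lookups to a local cache and forbid re-downloading. The cache path below is a placeholder:

```shell
# Point the Hugging Face cache at the directory holding the downloaded model
# (placeholder path) and forbid network lookups so the local copy is used.
export HF_HOME=/path/to/hf-cache
export TRANSFORMERS_OFFLINE=1
```

Note this only works if the model files are laid out as `transformers` expects inside the cache directory; whether spacy-llm's model registry also accepts a plain local directory as `name` is a separate question best checked against its docs.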

I really like spacy-llm, but it is impossible for me to use it. I keep hitting connection timeouts with a working OpenAI API key, and after spending much...
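If the timeouts come from the REST backend giving up too early rather than from the network itself, the OpenAI model factories in spacy-llm expose retry and timeout settings. A sketch of loosening them (parameter names as I recall them from the REST backend, worth verifying against the current API docs; the values are illustrative):

```ini
[components.llm.model]
@llm_models = "spacy.GPT-3-5.v1"
max_tries = 5
interval = 3.0
max_request_time = 60.0
```

Persistent timeouts even with generous limits usually indicate a proxy, firewall, or regional-availability issue between the machine and the OpenAI endpoint.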