Nathan Evans
This is coming soon - OpenAI changed the format of their response, so we had to update our LLM management library. Should be releasing with 2.6.0 in the next couple...
2.6.0 is available now. Not only does it resolve the format issues from OpenAI, it also introduces the option to use LiteLLM to configure any model provider that they support.
Can you describe the differences in the prompt? Also, can you report on the number of extracted entities with the default prompt versus the tuned prompt? My first guess is...
Ok, thanks for the additional detail. There aren't any obvious reasons why the prompt would slow things down, other than entity/relationship counts - generally the more, the slower. Your results...
We see this issue filed commonly with models that return an unexpected format. Routing to the consolidated alternate model providers issue #657.
Closing due to inactivity after community response
Did you run prompt tuning? It looks like the issue is that prompt tune calls `format` to insert its variables, which reduces the bracket count by one when the prompt is written back out....
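For anyone hitting this, here is a minimal sketch of the bracket-collapsing behavior described above (the template string is hypothetical, just to illustrate Python's `str.format` escaping rules):

```python
# In Python, literal braces inside a format template must be doubled
# ("{{" / "}}"). Calling .format() collapses each doubled pair into a
# single brace, so if the filled-in prompt is written back to disk and
# formatted again later, one level of escaping has been lost.
template = 'Return JSON like {{"type": "person"}} for {entity}'

filled = template.format(entity="Alice")
print(filled)
# Return JSON like {"type": "person"} for Alice

# Formatting the already-formatted string a second time fails, because
# the now-single braces are interpreted as replacement fields.
try:
    filled.format(entity="Bob")
except KeyError as e:
    print("KeyError:", e)
```

So a prompt that round-trips through `format` once ends up with one fewer layer of brackets than the original template expected.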
Consolidating language support issues here: #696
[Issue] Crashing at Entity Extraction using Ollama: KeyError: "['type', 'description'] not in index"
Routing to #657
Consolidating alternate model issues here: #657