Nathan Evans

Try using the [init command](https://microsoft.github.io/graphrag/posts/config/init/), which will create a `settings.yaml` file.

Thanks for the suggestion - we are adding LiteLLM support with #2051, but we also needed to include a number of rate-limiting and retry features to support larger pipeline runs.

We'll work to get the notebooks updated. In the meantime, one thing that might be helpful: if you set `snapshots.embeddings` to `true` in your settings.yaml, we'll output a dataframe with...
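For reference, the relevant fragment of settings.yaml would look roughly like this (key names taken from the comment above; treat as a sketch if your GraphRAG version differs):

```yaml
snapshots:
  embeddings: true
```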

The prompt templates are exported as text files in the project root, which is useful for editing/tuning, but those are not the per-call prompts. As @IT-Bill mentions, we do store...

If you set `model_supports_json: true` in your model config, it should enforce JSON via the OpenAI API call (which should be compatible with 4o-mini). If you don't set that, we...
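A sketch of where that flag sits in the model config, assuming a typical settings.yaml layout (block and model names here are illustrative, only `model_supports_json: true` is taken from the comment above):

```yaml
models:
  default_chat_model:
    model: gpt-4o-mini
    model_supports_json: true
```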

We supply a pydantic model for JSON requests, which OpenAI guarantees will validate. That may not equate to a valid response if `None` is returned though. From [their post](https://openai.com/index/introducing-structured-outputs-in-the-api/) I...

I was wrong earlier - the version of fnllm we are on does not yet supply the Pydantic model to OpenAI. We try to parse and fit into the model....
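The parse-then-fit approach described above can be sketched as follows. This is a minimal stand-in, not GraphRAG's actual code: the dataclass, field names, and function are hypothetical, and the point is only that without schema enforcement at the API call, parsing can fail and must be handled.

```python
import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExtractionResult:
    """Stand-in for the Pydantic response model; field is illustrative."""
    entities: list


def parse_and_fit(raw: Optional[str]) -> Optional[ExtractionResult]:
    """Parse the model's raw text output and try to fit it into the model.

    Since the schema is not enforced by the API call itself, the raw text
    may be None, malformed JSON, or missing fields -- all of which yield
    None here rather than a validated result.
    """
    if raw is None:
        return None
    try:
        data = json.loads(raw)
        return ExtractionResult(entities=list(data["entities"]))
    except (json.JSONDecodeError, KeyError, TypeError):
        return None
```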

Here is a user-submitted solution for fixing the community rating value. We normally expect a float, but some models will send an int. https://github.com/microsoft/graphrag/issues/772
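The gist of such a fix can be sketched as a small normalization step applied to the parsed report (the field name `rating` and the function below are illustrative; see the linked issue for the actual user-submitted patch):

```python
def normalize_rating(report: dict) -> dict:
    """Coerce an integer community rating to the expected float type.

    Some models return e.g. 7 instead of 7.0, which fails strict float
    validation; coercing before validation sidesteps the mismatch.
    """
    rating = report.get("rating")
    if isinstance(rating, int) and not isinstance(rating, bool):
        report["rating"] = float(rating)
    return report
```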

@Mxk-1 has found chunking settings that help resolve issues with `create_base_entity_graph` when using Ollama:

> The chunk splitting in the original setting.yaml provided may not be suitable for the model launched with...
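The chunking block in settings.yaml would be adjusted along these lines. The numbers below are placeholders, not @Mxk-1's values (those are in the linked comment); smaller chunks generally suit smaller local models:

```yaml
chunks:
  size: 300     # tokens per chunk; placeholder value
  overlap: 50   # token overlap between chunks; placeholder value
```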

Note that as of GraphRAG 2.0 we allow injection of custom model providers. There are some details on how to do this in the new [models docs page](https://microsoft.github.io/graphrag/config/models/).
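The injection works along the lines of a factory-registration pattern. The sketch below is self-contained and uses stand-in names throughout (it is not GraphRAG's actual API; see the models docs page for the real protocol and registration calls):

```python
# Minimal sketch of the provider-injection pattern: a registry maps a
# provider name (referenceable from configuration) to a class implementing
# the chat interface. All names here are hypothetical stand-ins.
class ModelRegistry:
    _providers = {}

    @classmethod
    def register(cls, name, provider_cls):
        cls._providers[name] = provider_cls

    @classmethod
    def create(cls, name, **kwargs):
        return cls._providers[name](**kwargs)


class MyCustomChatModel:
    """Hypothetical custom provider; a real one would call a backing service."""

    def __init__(self, model="my-model"):
        self.model = model

    def chat(self, prompt):
        # Echo instead of a real model call, to keep the sketch runnable.
        return f"[{self.model}] echo: {prompt}"


ModelRegistry.register("my_custom_chat", MyCustomChatModel)
model = ModelRegistry.create("my_custom_chat", model="local-llm")
```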