bmaltais
Hard to tell. Even commercial models like gpt-3.5-turbo are not providing mind-blowing results when compared to something like Google's NotebookLM. A lot of the time GraphRAG fails to provide...
@jgbradley1 Thank you for the info. I did create the prompt for the document using the auto-generated feature of graphrag. It still performed worse than expected, probably because I used gpt-3.5...
OK, I am able to reproduce the issue... investigating...
OK, found the reason why. Should now be fixed in the dev branch.
My understanding from the code is that unless you set it to something other than `snr` it won't be used:

```
parser.add_argument(
    "--loss_type",
    type=str,
    default="l2",
    choices=["l2", "huber", "smooth_l1"],
    help="The type...
```
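The default behaviour is easy to confirm in isolation. Below is a small sketch that reproduces just the quoted argument definition (the surrounding sd-scripts wiring is omitted, and the full `help` text is an assumption since it is truncated above):

```python
import argparse

# Reproduce the quoted sd-scripts argument definition in isolation.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--loss_type",
    type=str,
    default="l2",
    choices=["l2", "huber", "smooth_l1"],
    help="The type of loss function to use",  # assumed wording; truncated in the snippet
)

# With no flag passed, the default "l2" applies.
print(parser.parse_args([]).loss_type)  # l2

# Explicitly selecting another loss type.
print(parser.parse_args(["--loss_type", "huber"]).loss_type)  # huber
```

So unless `--loss_type` is passed explicitly, training falls back to plain `l2`.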
The codebase has undergone numerous changes over time, making it challenging to pinpoint the exact source of differences in output. These variations could stem from multiple places within the sd-scripts,...
This is annoying... I just tried switching to ollama because... my first attempt at running the solution against ChatGPT cost me $45 and did not work in the end... so...
Good news. I got it started. The key was using the right config to set concurrent requests to 1:

```
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model:...
```
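For reference, here is a fuller sketch of what that `llm` block can look like when pointing GraphRAG at a local Ollama server through its OpenAI-compatible endpoint. The `api_base`, model name, and comments are assumptions for a typical local setup, not taken verbatim from my file:

```
llm:
  api_key: ${GRAPHRAG_API_KEY}   # not checked by Ollama, but must be set
  type: openai_chat              # Ollama exposes an OpenAI-compatible API
  model: llama3                  # assumption: whatever chat model you pulled
  api_base: http://localhost:11434/v1
  concurrent_requests: 1         # the setting that made it work
```

Dropping `concurrent_requests` to 1 avoids overwhelming a single local model instance with parallel calls.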
> I also managed to get the entity extraction working with Ollama. However, the embeddings seem to be more tricky due to no available OpenAI compatible API for embeddings from...
I configured mine as:

```
embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: nomic-embed-text...
```